Figure 1. The infrastructure of our differential language analysis. 1) Feature Extraction. Language use features include: (a) words and phrases: sequences of 1 to 3 words found using an emoticon-aware tokenizer and a collocation filter (24,530 features); (b) topics: automatically derived groups of words for a single topic found using the Latent Dirichlet Allocation technique [72,75] (500 features). 2) Correlational Analysis. We find the correlation (β of ordinary least squares linear regression) between each language feature and each demographic or psychometric outcome. All relationships presented in this work are significant at least at a Bonferroni-corrected p < 0.001 [76]. 3) Visualization. Graphical representation of the correlational analysis output. doi:10.1371/journal.pone.0073791.g001

Our approach is open-vocabulary: the words and clusters of words analyzed are determined by the data itself.

Closed Vocabulary: Word-Category Lexica

A common method for linking language with psychological variables involves counting words belonging to manually created categories of language. Sometimes referred to as the word-count approach, one counts how often words in a given category are used by an individual, i.e., the percentage of the participant's words that come from the given category:

\[ p(\text{category} \mid \text{subject}) = \sum_{word \in category} p(word \mid subject) \]

Open Vocabulary: Words, Phrases, and Topics

In practice, we kept phrases with pmi values greater than 2 × length, where length is the number of words contained in the phrase, ensuring that the phrases we keep are informative parts of speech and not just accidental juxtapositions (here pmi is the pointwise mutual information of a phrase, \( \mathrm{pmi}(\text{phrase}) = \log \frac{p(\text{phrase})}{\prod_{word \in phrase} p(word)} \)). All word and phrase counts are normalized by each subject's total word use (p(word | subject)), and we apply the Anscombe transformation [71] to the normalized values for variance stabilization (p_ans):

\[ p(\text{phrase} \mid \text{subject}) = \frac{freq(\text{phrase}, \text{subject})}{\sum_{phrase' \in vocab(subject)} freq(phrase', \text{subject})} \]

\[ p_{ans}(\text{phrase} \mid \text{subject}) = 2\sqrt{p(\text{phrase} \mid \text{subject}) + 3/8} \]

where vocab(subject) returns the list of all words and phrases used by that subject. These Anscombe-transformed "relative frequencies" of words or phrases (p_ans) are then used as the independent variables in all our analyses. Lastly, we restrict our analysis to those words and phrases which are used by at least 1% of our subjects, keeping the focus on common language.

The second type of linguistic feature, topics, consists of word clusters created using Latent Dirichlet Allocation (LDA) [72,73]. The LDA generative model assumes that documents (i.e., Facebook messages) contain a combination of topics, and that topics are distributions of words; since the words in a document are known, the latent topic variables can be estimated through Gibbs sampling [74]. We use the implementation of the LDA algorithm provided by the Mallet package [75], adjusting one parameter (alpha = 0.30) to favor fewer topics per document, since individual Facebook status updates tend to contain fewer topics than the typical documents (newspaper or encyclopedia articles) to which LDA is applied. All other parameters were kept at their defaults. An example of such a model's output is the following set of words (tuesday, monday, wednesday, friday, thursday, week, sunday, saturday), which clusters together days of the week purely by exploiting their similar patterns of use.
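The paper fits these topics with Mallet's Gibbs-sampling LDA; as a rough illustration only, the sketch below uses scikit-learn's variational implementation instead (a stand-in of ours, not the authors' toolchain), showing where the low document-topic prior (alpha = 0.30) enters. The toy statuses and all variable names are our own.

```python
# Illustrative stand-in for the paper's topic extraction. Assumption:
# scikit-learn's variational LDA replaces Mallet's Gibbs sampler.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

statuses = [  # toy stand-ins for Facebook status updates
    "cannot wait for friday this week is dragging",
    "monday again need more coffee",
    "saturday plans with friends",
]

# Bag-of-words counts; note the paper's tokenizer is emoticon-aware,
# which a plain CountVectorizer is not.
counts = CountVectorizer().fit_transform(statuses)

# doc_topic_prior is the Dirichlet alpha; the paper sets it to 0.30 so
# that each short status update is explained by only a few topics.
lda = LatentDirichletAllocation(
    n_components=5,        # the paper extracts 500 topics from its full corpus
    doc_topic_prior=0.30,
    random_state=0,
)
doc_topics = lda.fit_transform(counts)               # per-status topic proportions
top_words = lda.components_.argsort(axis=1)[:, -3:]  # top word ids per topic
```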
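To make the word- and phrase-feature construction above concrete, here is a minimal sketch of the collocation filter, the per-subject normalization, and the Anscombe transform. The function names and the toy token list are ours, not from the paper.

```python
from collections import Counter
from math import log, sqrt

def relative_freqs(tokens):
    # p(word | subject): each count normalized by the subject's total word use
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def keep_phrase(p_phrase, word_probs):
    # Collocation filter: keep a phrase only when its pointwise mutual
    # information exceeds 2 * length (length = number of words in the phrase).
    pmi = log(p_phrase) - sum(log(p) for p in word_probs)
    return pmi > 2 * len(word_probs)

def anscombe(p):
    # Variance-stabilizing transform: p_ans = 2 * sqrt(p + 3/8)
    return 2 * sqrt(p + 3 / 8)

tokens = ["so", "excited", "for", "the", "weekend", "so", "happy"]
p = relative_freqs(tokens)
features = {w: anscombe(v) for w, v in p.items()}  # independent variables
```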
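Similarly, the closed-vocabulary word-count score defined earlier (p(category | subject), the sum of p(word | subject) over a manually built category) reduces to a few lines. The lexicon below is a hypothetical toy category, not one of the actual lexica used in the paper.

```python
# Hypothetical relative frequencies p(word | subject) for one subject
rel_freqs = {"happy": 0.020, "excited": 0.010, "meeting": 0.005}

positive_emotion = {"happy", "excited", "love"}  # toy lexicon category

def category_score(p_word, category):
    # p(category | subject) = sum over category words of p(word | subject)
    return sum(p for word, p in p_word.items() if word in category)

print(category_score(rel_freqs, positive_emotion))  # ~0.03
```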
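Finally, the correlational step summarized in Figure 1 (an ordinary least squares β per language feature, thresholded at a Bonferroni-corrected p < 0.001) can be sketched as follows, assuming SciPy is available; the feature values and ages are invented for illustration.

```python
# Sketch of Figure 1, step 2: one OLS slope per language feature, kept
# only if it survives a Bonferroni-corrected p < 0.001 threshold.
from scipy.stats import linregress

features = {  # hypothetical Anscombe-transformed feature values per subject
    "party": [1.2, 0.9, 1.4, 0.8, 1.1, 1.3],
    "homework": [0.7, 1.3, 0.6, 1.2, 0.9, 0.8],
}
ages = [19, 24, 18, 25, 21, 20]  # hypothetical outcome, one per subject

threshold = 0.001 / len(features)  # Bonferroni: divide by the number of tests
for word, values in features.items():
    fit = linregress(values, ages)  # fit.slope is the OLS beta
    if fit.pvalue < threshold:
        print(f"{word}: beta={fit.slope:.2f}, p={fit.pvalue:.2g}")
```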