Document Type

Conference Paper

Rights

This item is available under a Creative Commons License for non-commercial use only

Disciplines

1.2 COMPUTER AND INFORMATION SCIENCE, Computer Sciences

Publication Details

CEUR Workshop Proceedings

Volume 2086, 2017, Pages 50-62

25th Irish Conference on Artificial Intelligence and Cognitive Science (AICS 2017), Dublin, Ireland, 7-8 December 2017

Abstract

With the increasing amounts of textual data being collected online, automated text classification techniques are becoming increasingly important. However, much of this data is short-text, with just a handful of terms per document (e.g. text messages, tweets or Facebook posts). This data is generally too sparse and noisy to obtain satisfactory classification performance. Two techniques which aim to alleviate this problem are Latent Dirichlet Allocation (LDA) and Formal Concept Analysis (FCA). Both techniques have been shown to improve the performance of short-text classification by reducing the sparsity of the input data. The relative performance of classifiers enhanced using each technique has not been directly compared; to address this, this work presents an experiment comparing them using supervised models. The experiment shows that FCA leads to a much higher degree of correlation among terms than LDA and initially gives lower classification accuracy. However, once a subset of features is selected for training, the FCA models can outperform those trained on LDA-expanded data. © 2017 CEUR-WS. All rights reserved.
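The abstract's idea of reducing short-text sparsity with LDA can be illustrated with a minimal sketch: infer a small number of latent topics from a toy corpus, then append each document's dense topic distribution to its sparse bag-of-words vector before training a classifier. This is a hypothetical illustration using scikit-learn, not the paper's actual pipeline; the corpus, labels, and number of topics are invented for demonstration.

```python
# Hedged sketch: expanding sparse short-text vectors with LDA topic
# features before classification. Toy data; not the paper's experiment.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

docs = ["cheap meds online now", "win a free prize today",
        "meeting moved to 3pm", "lunch at noon tomorrow",
        "free prize claim now", "project review at 3pm"]
labels = [1, 1, 0, 0, 1, 0]  # invented labels: 1 = spam, 0 = ham

# Sparse bag-of-words representation of the short texts
vec = CountVectorizer()
X = vec.fit_transform(docs)

# Infer latent topics; each row of `topics` is a document's
# distribution over the 2 topics (a dense, low-dimensional feature)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(X)

# Concatenate the original sparse terms with the dense topic features,
# so the classifier sees both surface terms and latent-topic signal
X_expanded = np.hstack([X.toarray(), topics])

clf = LogisticRegression().fit(X_expanded, labels)
print(X_expanded.shape)  # original vocabulary size + 2 topic columns
```

On a real short-text corpus the topic features let documents that share no terms still resemble each other, which is the sparsity reduction the abstract refers to; the paper's FCA-based expansion plays an analogous role via formal concepts rather than latent topics.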

DOI

10.21427/D79N6T
