Investigating Semantic Differences in User-Generated Content by Cross-Domain Sentiment Analysis Means

Blog Article

Sentiment analysis of domain-specific short messages (DSSMs) raises challenges because of their peculiar nature, which often includes field-specific terminology, jargon, and abbreviations. In this paper, we investigate the distinctive characteristics of user-generated content across multiple domains, with DSSMs as the central focus. With cross-domain models on the rise, we examine their capability to accurately interpret the hidden meanings embedded in domain-specific terminology. For our investigation, we use three community platform datasets: a Jira dataset for DSSMs, as it contains vocabulary particular to software engineering; a Twitter dataset for domain-independent short messages (DISMs), as it holds everyday language; and a Reddit dataset as an intermediate case.

Using machine learning techniques, we explore whether software engineering short messages exhibit notable differences compared to regular messages. To this end, we apply a cross-domain knowledge transfer approach and RoBERTa-based sentiment analysis to test whether efficient models exist for addressing the challenges DSSMs pose across multiple domains. Our study reveals that DSSMs are semantically different from DISMs, as shown by the differences in the F1 scores the models achieve across domains.
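To make the cross-domain comparison more concrete, the sketch below scores a handful of hand-written messages with an off-the-shelf RoBERTa sentiment model and compares macro-F1 between a Jira-like and a Twitter-like sample. This is only a minimal illustration of the idea: the model name, the example messages, and their labels are assumptions for demonstration purposes, not the datasets or fine-tuned models used in the study.

    # Minimal sketch: compare how a general-purpose RoBERTa sentiment model handles
    # domain-specific (Jira-like) vs. domain-independent (Twitter-like) short messages.
    # Model name and in-line samples are illustrative assumptions only.
    from transformers import pipeline
    from sklearn.metrics import f1_score

    sentiment = pipeline(
        "sentiment-analysis",
        model="cardiffnlp/twitter-roberta-base-sentiment-latest",
    )

    # Hypothetical labelled examples for each domain.
    jira_like = [
        ("Null pointer exception when merging the feature branch, blocker for release", "negative"),
        ("CI pipeline is green again after the hotfix, nice work", "positive"),
    ]
    twitter_like = [
        ("What a gorgeous sunset tonight!", "positive"),
        ("Worst commute of my life, two hours stuck in traffic", "negative"),
    ]

    def macro_f1(samples):
        """Predict sentiment for each message and return the macro-averaged F1 score."""
        texts, gold = zip(*samples)
        pred = [out["label"].lower() for out in sentiment(list(texts))]
        return f1_score(gold, pred, average="macro",
                        labels=["negative", "neutral", "positive"])

    # A noticeably lower score on the Jira-like sample would hint that
    # domain-specific vocabulary is handled less well than everyday language.
    print("Jira-like    macro-F1:", macro_f1(jira_like))
    print("Twitter-like macro-F1:", macro_f1(twitter_like))

In the study itself, such per-domain F1 differences, measured on the full Jira, Twitter, and Reddit datasets, are what indicate that DSSMs and DISMs are semantically distinct.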
