Text Analysis of Twitter Data


Importing the Libraries

Reference notebook: www.kaggle.com/code/errear…

In [1]:

import os
import pandas as pd
import numpy as np
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import plotly as py
import plotly.graph_objs as go

import gensim
from gensim import corpora, models, similarities
import logging
import tempfile

import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords

from string import punctuation
from collections import OrderedDict

import seaborn as sns
import pyLDAvis.gensim  # pip install pyLDAvis
import matplotlib.pyplot as plt
%matplotlib inline

init_notebook_mode(connected=True) # do not miss this line

import warnings
warnings.filterwarnings("ignore")  # suppress noisy library warnings

Importing the Data

In [2]:

df = pd.read_csv("data.csv", encoding="gb18030")
df.head()

Out[2]:

|   | row ID | Tweet | Time | Retweet from | User |
|---|--------|-------|------|--------------|------|
| 0 | Row0 | @MeltingIce Assuming max acceleration of 2 to ... | 2017-09-29 17:39:19 | NaN | elonmusk |
| 1 | Row1 | RT @SpaceX: BFR is capable of transporting sat... | 2017-09-29 10:44:54 | SpaceX | elonmusk |
| 2 | Row2 | @bigajm Yup :) | 2017-09-29 10:39:57 | NaN | elonmusk |
| 3 | Row3 | Part 2 t.co/8Fvu57muhM | 2017-09-29 09:56:12 | NaN | elonmusk |
| 4 | Row4 | Fly to most places on Earth in under 30 mins a... | 2017-09-29 09:19:21 | NaN | elonmusk |

In [3]:

df.shape  # number of rows and columns

Out[3]:

(3218, 5)

Check for missing values in the data:

In [4]:

df.isnull().sum()

Out[4]:

row ID             0
Tweet              0
Time               0
Retweet from    2693
User               0
dtype: int64
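
Most tweets are original posts rather than retweets, which is why "Retweet from" is empty for 2,693 of the 3,218 rows. If a boolean indicator is more convenient than NaN, a minimal sketch (the is_retweet column is an addition, not part of the original notebook):

df["is_retweet"] = df["Retweet from"].notna()  # True where the tweet is a retweet
df["is_retweet"].value_counts()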

Data Preprocessing

In [5]:

df.dtypes  # before conversion

Out[5]:

row ID          object
Tweet           object
Time            object
Retweet from    object
User            object
dtype: object

Convert the time field:

In [6]:

df["Time"] = pd.to_datetime(df["Time"])  # 转换成时间格式  

In [7]:

df.dtypes  # after conversion

Out[7]:

row ID                  object
Tweet                   object
Time            datetime64[ns]
Retweet from            object
User                    object
dtype: object

In [8]:

df["Time"] = pd.to_datetime(df['Time'], format='%y-%m-%d %H:%M:%S')
df.head()

Out[8]:

|   | row ID | Tweet | Time | Retweet from | User |
|---|--------|-------|------|--------------|------|
| 0 | Row0 | @MeltingIce Assuming max acceleration of 2 to ... | 2017-09-29 17:39:19 | NaN | elonmusk |
| 1 | Row1 | RT @SpaceX: BFR is capable of transporting sat... | 2017-09-29 10:44:54 | SpaceX | elonmusk |
| 2 | Row2 | @bigajm Yup :) | 2017-09-29 10:39:57 | NaN | elonmusk |
| 3 | Row3 | Part 2 t.co/8Fvu57muhM | 2017-09-29 09:56:12 | NaN | elonmusk |
| 4 | Row4 | Fly to most places on Earth in under 30 mins a... | 2017-09-29 09:19:21 | NaN | elonmusk |

In [9]:

df.drop("row ID", axis=1, inplace=True)  # drop the redundant row ID column

Comparing Activity Across Years

In [10]:

tweetsdata = df["Time"]
tweetsdata

Out[10]:

0      2017-09-29 17:39:19
1      2017-09-29 10:44:54
2      2017-09-29 10:39:57
3      2017-09-29 09:56:12
4      2017-09-29 09:19:21
               ...        
3213   2012-11-20 08:52:03
3214   2012-11-20 08:38:31
3215   2012-11-20 08:30:44
3216   2012-11-19 08:59:46
3217   2012-11-16 17:59:47
Name: Time, Length: 3218, dtype: datetime64[ns]

In [11]:

trace = go.Histogram(  # the histogram trace
    x = tweetsdata,  # x-axis data: tweet timestamps
    marker = dict(color="blue"),  # bar color
    opacity = 0.75  # bar opacity
)

layout = go.Layout(  # overall layout
    title = "Tweet Activity Over Years",  # chart title
    height=450,  # figure height
    width=1200,  # figure width
    xaxis=dict(title='Month and year'),  # x-axis title
    yaxis=dict(title='Tweet Quantity'),  # y-axis title
    bargap=0.2)  # gap between bars

data = [trace]

fig = go.Figure(data=data, layout=layout)

fig.show()
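
The histogram above bins the raw timestamps automatically. For explicit per-year counts, a small sketch using the pandas .dt accessor (yearly_counts is an illustrative name):

yearly_counts = df["Time"].dt.year.value_counts().sort_index()  # number of tweets per calendar year
print(yearly_counts)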

Corpus Processing

Preparing the Text List

Prepare the corpus as a list of raw tweet strings:

In [12]:

corpus = df["Tweet"].tolist()
corpus[:5]

Out[12]:

["@MeltingIce Assuming max acceleration of 2 to 3 g's, but in a comfortable direction. Will feel like a mild to moder? https://t.co/fpjmEgrHfC",
 'RT @SpaceX: BFR is capable of transporting satellites to orbit, crew and cargo to the @Space_Station and completing missions to the Moon an?',
 '@bigajm Yup :)',
 'Part 2 https://t.co/8Fvu57muhM',
 'Fly to most places on Earth in under 30 mins and anywhere in under 60. Cost per seat should be? https://t.co/dGYDdGttYd']

In [13]:

TEMP_FOLDER = os.getcwd()  # use the current working directory for saved artifacts (os is already imported above)

Stopword Handling

In [14]:

list1 = ['RT', 'rt']  # Twitter retweet markers

stoplist = stopwords.words('english') + list(punctuation) + list1  # English stopwords + punctuation + retweet markers

stoplist[:10]

Out[14]:

['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're"]

Tokenizing the Tweets

The code below works as follows:

  • str(document).lower().split(): convert each document to a string, lowercase it, and split on whitespace, producing the individual tokens

In [15]:

texts = [[word for word in str(document).lower().split() if word not in stoplist] for document in corpus]
print(texts[0])
['@meltingice', 'assuming', 'max', 'acceleration', '2', '3', "g's,", 'comfortable', 'direction.', 'feel', 'like', 'mild', 'moder…', 'https://t.co/fpjmegrhfc']
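
Note that splitting on whitespace alone leaves URLs, @mentions, and edge punctuation attached to tokens (e.g. 'direction.' and "g's," above). If stricter tokens are wanted, a hedged cleanup sketch (clean_tokens is a hypothetical helper, not from the original notebook):

def clean_tokens(document):
    tokens = str(document).lower().split()
    tokens = [t for t in tokens if not t.startswith(("http", "@"))]  # drop URLs and mentions
    tokens = [t.strip(punctuation) for t in tokens]  # trim punctuation from token edges
    return [t for t in tokens if t and t not in stoplist]

print(clean_tokens(corpus[0]))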

Build a dictionary from the tokens (mapping each token to an integer id) and save it under the specified path:

In [16]:

dictionary = corpora.Dictionary(texts)
dictionary.save(os.path.join(TEMP_FOLDER, 'elon.dict')) 

Get the integer id assigned to each token:

In [17]:

dictionary.token2id  # mapping from token to integer id

Out[17]:

{'2': 0,
 '3': 1,
 '@meltingice': 2,
 'acceleration': 3,
 'assuming': 4,
 'comfortable': 5,
 'direction.': 6,
 'feel': 7,
 "g's,": 8,
 'https://t.co/fpjmegrhfc': 9,
 'like': 10,
 'max': 11,
 'mild': 12,
 'moder…': 13,
 '@space_station': 14,
 ...}
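
Optionally, the dictionary can be pruned before modeling so that very rare and very common tokens do not dominate the topics; the thresholds below are illustrative, not from the original notebook:

dictionary.filter_extremes(no_below=5, no_above=0.5)  # keep tokens in at least 5 tweets and at most 50% of tweets
print(len(dictionary))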

Bag-of-Words Representation

Generate the corpus: convert each tokenized tweet into its bag-of-words vector (note that this reuses the name corpus, overwriting the raw-text list):

In [18]:

corpus = [dictionary.doc2bow(text) for text in texts]  # (token id, count) pairs per tweet
corpus[:2]

Out[18]:

[[(0, 1),
  (1, 1),
  (2, 1),
  (3, 1),
  (4, 1),
  (5, 1),
  (6, 1),
  (7, 1),
  (8, 1),
  (9, 1),
  (10, 1),
  (11, 1),
  (12, 1),
  (13, 1)],
 [(14, 1),
  (15, 1),
  (16, 1),
  (17, 1),
  (18, 1),
  (19, 1),
  (20, 1),
  (21, 1),
  (22, 1),
  (23, 1),
  (24, 1),
  (25, 1),
  (26, 1)]]
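
Each pair is (token id, count) within that tweet; the dictionary maps the ids back to tokens:

print([(dictionary[token_id], count) for token_id, count in corpus[0]])  # e.g. ('2', 1), ('3', 1), ...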

Serialize the corpus and save it to a file at the specified path:

In [19]:

# save the corpus in Matrix Market format
corpora.MmCorpus.serialize(os.path.join(TEMP_FOLDER, 'corpus.mm'), corpus)
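
Both artifacts can be loaded back later, for example in a new session:

loaded_dict = corpora.Dictionary.load(os.path.join(TEMP_FOLDER, 'elon.dict'))  # reload the dictionary
loaded_corpus = corpora.MmCorpus(os.path.join(TEMP_FOLDER, 'corpus.mm'))  # reload the serialized corpus
print(loaded_dict)
print(loaded_corpus)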

Modeling

TF-IDF Model

In [20]:

tfidf = models.TfidfModel(corpus)  # 1 - initialize the TF-IDF model

corpus_tfidf = tfidf[corpus]  # 2 - apply the model to turn the corpus into TF-IDF vectors
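
As a sanity check, the TF-IDF vector of the first tweet can be inspected; each pair is (token id, weight):

first_vec = next(iter(corpus_tfidf))  # TF-IDF vector of the first tweet
print(first_vec[:5])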

LDA Model

In [21]:

total_topics = 5  # number of topics

lda = models.LdaModel(corpus,  # bag-of-words corpus
                      id2word=dictionary,  # mapping from token id to token
                      num_topics=total_topics  # number of topics
                     )
corpus_lda = lda[corpus_tfidf]  # apply the trained model to the TF-IDF corpus
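
The trained model can also score individual tweets; a quick check on the first bag-of-words vector:

print(lda.get_document_topics(corpus[0]))  # (topic id, probability) pairs for the first tweet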

In [22]:

lda.show_topics(total_topics, 3)

Out[22]:

[(0, '0.006*"..." + 0.006*"tesla" + 0.005*"model"'),
 (1, '0.012*"launch" + 0.011*"falcon" + 0.009*"@spacex:"'),
 (2, '0.014*"tesla" + 0.006*"model" + 0.005*"new"'),
 (3, '0.011*"model" + 0.006*"good" + 0.006*"tesla"'),
 (4, '0.008*"tesla" + 0.006*"model" + 0.005*"w"')]

In [23]:

data_lda = {i: OrderedDict(lda.show_topic(i, 25)) for i in range(total_topics)}  # top 25 words per topic
data_lda

Out[23]:

{0: OrderedDict([('...', 0.006462383),
                 ('tesla', 0.005584647),
                 ('model', 0.0048239143),
                 ('new', 0.004051302),
                 ('next', 0.003930719),
                 ('great', 0.0030215571),
                 ('good', 0.002984404),
                 ('miles', 0.0029328458),
                 ('like', 0.002857408),
                 ("i'm", 0.0027793457),
                 ('rocket', 0.0025001287),
                 ('back', 0.0024146684),
                 ('@elonmusk', 0.0023003744),
                 ('long', 0.0022880563),
                 ('super', 0.0022213901),
                 ('@spacex', 0.0022024196),
                 ('flight', 0.0021213787),
                 ...]),
 ...}

Generating the Data

In [24]:

df_lda = pd.DataFrame(data_lda)
df_lda.head()

Out[24]:

|       | 0        | 1        | 2        | 3        | 4        |
|-------|----------|----------|----------|----------|----------|
| ...   | 0.006462 | NaN      | NaN      | NaN      | NaN      |
| tesla | 0.005585 | 0.005550 | 0.014481 | 0.005585 | 0.008305 |
| model | 0.004824 | 0.002016 | 0.005960 | 0.010575 | 0.006079 |
| new   | 0.004051 | NaN      | 0.004858 | NaN      | 0.001924 |
| next  | 0.003931 | NaN      | 0.002050 | 0.004409 | NaN      |

In [25]:

df_lda = df_lda.fillna(0).T  # fill NaN with 0 and transpose: rows become topics, columns become words
df_lda

Out[25]:

|   | ... | tesla | model | new | next | great | good | miles | like | i'm | ... | vs | time | yeah, | software | people | 2 | yes | range | cool | yes, |
|---|-----|-------|-------|-----|------|-------|------|-------|------|-----|-----|----|------|-------|----------|--------|---|-----|-------|------|------|
| 0 | 0.006462 | 0.005585 | 0.004824 | 0.004051 | 0.003931 | 0.003022 | 0.002984 | 0.002933 | 0.002857 | 0.002779 | ... | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| 1 | 0.000000 | 0.005550 | 0.002016 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.002652 | 0.000000 | ... | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| 2 | 0.000000 | 0.014481 | 0.005960 | 0.004858 | 0.002050 | 0.000000 | 0.001897 | 0.000000 | 0.002496 | 0.000000 | ... | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| 3 | 0.000000 | 0.005585 | 0.010575 | 0.000000 | 0.004409 | 0.000000 | 0.005739 | 0.000000 | 0.005094 | 0.000000 | ... | 0.001973 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| 4 | 0.000000 | 0.008305 | 0.006079 | 0.001924 | 0.000000 | 0.002156 | 0.002271 | 0.000000 | 0.002187 | 0.000000 | ... | 0.000000 | 0.003843 | 0.002770 | 0.002591 | 0.002583 | 0.002147 | 0.002143 | 0.002091 | 0.002056 | 0.002036 |

5 rows × 80 columns

LDA Visualization

Clustermap

The clustermap shows the correlations between words across topics:

In [26]:

g = sns.clustermap(df_lda.corr(), # word-word correlation matrix
                center = 0,
                standard_scale = 1,
                cmap = "RdBu",
                metric = "cosine",
                linewidths = 0.75,
                figsize = (10,10)
               )

plt.setp(g.ax_heatmap.yaxis.get_majorticklabels(), rotation=0)
plt.show()

Interactive Visualization with pyLDAvis

In [27]:

pyLDAvis.enable_notebook()

# prepare() expects the bag-of-words corpus, not the transformed corpus_lda
panel = pyLDAvis.gensim.prepare(lda, corpus, dictionary, mds='tsne')
panel
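
The interactive panel can also be exported as a standalone HTML file ('lda_vis.html' is an illustrative filename):

pyLDAvis.save_html(panel, 'lda_vis.html')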