Linguamatics’ linguistic analysis provides immediate insight into tweet sentiment towards party leaders during the final televised UK election debate on 29 April 2010. Preliminary results from tweets sent during the debate, including a new view of instant reactions to particular issues (Figure 1), show a further narrowing of the gap between the leaders’ performances (Figure 2), with Nick Clegg still performing best overall.
Figure 1 – How twitterers reacted to particular issues in the final debate
The overall tweet analysis (Figure 2) for the three debates shows the percentage of tweets in favour of each of the leaders. Nick Clegg’s share dropped from 43% in the second debate to 37%, Gordon Brown’s from 35% to 32%, while David Cameron’s rose from 22% to 31%.
Figure 2 – Number of tweets showing positive sentiment towards each party leader
Top issues for the twitterers in the third debate (Figure 3) were immigration, banking, economy and tax. Clegg and Brown shared the lead on immigration, Clegg was ahead on banking and tax, whilst Brown clearly won on the economy.
Figure 3 – Winner per topic from number of relevant positive tweets
Tracking positive sentiment towards each of the leaders during all three debates (Figure 4) also reflects the narrowing gap between their performances.
Figure 4 – Positive sentiment towards leader over time during the debate
The published results come from the deep analysis of 187,000 tweets sent by 43,656 twitterers from 8.30pm – 10.00pm on the night of the third televised UK election debate.
Linguamatics’ I2E text mining software was used to find and summarize tweets that have the same meaning, however they are worded. I2E identifies the range of vocabulary used in tweets and uses linguistic analysis to collect and summarize the different ways opinion is expressed.
Description of the figures in the press release
Figure 1 shows how the twitterers reacted to particular issues during the debate.
This is a timeline showing the positive tweets made about each leader in relation to audience questions or key statements made by a leader.
Figure 2 shows the number of tweets that expressed a positive sentiment towards each of the party leaders.
The analysis identified tweets saying that a particular leader was doing well or had made a good point, or expressing liking for the leader. Linguistic filtering removed examples which were about expectations, e.g. “I hope the leader will do well”; questions, such as “anyone think the leader is doing well?”; and negations, such as “the leader did not do well” or “the leader made no sense”.
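The filtering described above can be sketched with simple pattern heuristics. This is an illustrative sketch only: the cue words and regular expressions below are assumptions, and the actual I2E software uses full linguistic analysis rather than keyword matching.

```python
import re

# Illustrative cue lists only -- not Linguamatics' actual patterns.
EXPECTATION = re.compile(r"\b(hope|expect|bet|wonder if)\b", re.IGNORECASE)
NEGATION = re.compile(r"\b(not|no|never|didn't|doesn't|isn't)\b", re.IGNORECASE)

def is_countable_positive(tweet: str) -> bool:
    """Return True if a positive-sounding tweet survives the three filters."""
    if tweet.rstrip().endswith("?"):   # questions: "anyone think X is doing well?"
        return False
    if EXPECTATION.search(tweet):      # expectations: "I hope X will do well"
        return False
    if NEGATION.search(tweet):         # negations: "X did not do well"
        return False
    return True

print(is_countable_positive("Clegg is doing well tonight"))        # True
print(is_countable_positive("I hope Clegg will do well"))          # False
print(is_countable_positive("anyone think Clegg is doing well?"))  # False
print(is_countable_positive("Brown did not do well"))              # False
```

Real tweets defeat such surface heuristics easily (double negation, sarcasm, quoted speech), which is why the press release emphasises linguistic analysis rather than keyword spotting.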
Figure 3 shows the winner per topic, based on the number of relevant positive tweets.
The analysis built a list of topics by identifying words or phrases which described the discussion subject; for example, Trident, nuclear weapons, armed forces, military and Eurofighter were assigned to defence. The tweets were then analyzed to find which said positive things about each leader in relation to a specific topic.
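The topic assignment and winner-per-topic tally can be sketched as follows. Only the defence keyword list comes from the text; the other topic lists, and the tally logic, are illustrative assumptions rather than Linguamatics' actual method.

```python
from collections import Counter

# Hypothetical keyword lexicon: only the "defence" entries are from the
# press release; "immigration" and "economy" lists are illustrative guesses.
TOPIC_KEYWORDS = {
    "defence": ["trident", "nuclear weapons", "armed forces", "military", "eurofighter"],
    "immigration": ["immigration", "immigrant", "border"],
    "economy": ["economy", "recession", "deficit"],
}

def topics_for(tweet: str) -> set:
    """Return every topic whose keyword phrases appear in the tweet."""
    text = tweet.lower()
    return {topic for topic, phrases in TOPIC_KEYWORDS.items()
            if any(p in text for p in phrases)}

def winner_per_topic(positive_tweets):
    """positive_tweets: iterable of (leader, tweet_text) pairs already judged
    positive. Returns, per topic, the leader with the most positive tweets."""
    counts = {}  # topic -> Counter of leaders
    for leader, text in positive_tweets:
        for topic in topics_for(text):
            counts.setdefault(topic, Counter())[leader] += 1
    return {topic: c.most_common(1)[0][0] for topic, c in counts.items()}

sample = [("Brown", "Brown strong on the economy"),
          ("Clegg", "Good point from Clegg on Trident"),
          ("Brown", "Brown again solid on the economy")]
print(winner_per_topic(sample))  # {'economy': 'Brown', 'defence': 'Clegg'}
```

A tweet can mention several topics and so count towards each of them, which matches the report's observation that leads could be shared on a topic.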
Figure 4 shows Figure 1 (positive sentiment towards leaders over time during the debate) compared with the positive sentiment results from the two earlier debates.