Like Me? Follow Me.
We recently put out a press release ranking various law firms on Twitter, using the Klout score of each firm's main branded presence.
We took some stick for three reasons:
- The list didn't include "twegals", those involved in the legal world who tweet under their own name. I make no apologies for that: the study was aimed solely at tweets from law firm brands - an analysis of twegal rankings may follow.
- The list didn't include any Scottish firms - we subsequently published a list of our view of the top Scottish firms by Klout score here.
- The ranking was based on Klout scores - different people prefer different methods, and the general consensus amongst the twitterati seems to be that any single score is open to manipulation.
This got me wondering how we might overcome the issue of the different ranking sites giving different results.
As I see it, there are three main sites that allow you to assess your influence on Twitter: Klout, PeerIndex and Twitalyzer (cue the first complaint on the post, as there are bound to be others - but I need to draw the line somewhere). Each of these sites gives a score, but each one's algorithm varies slightly, delivering slightly different results.
Looking at what's come out of a couple of days of healthy debate, we were able to narrow down the list of law firms from the UK that people thought might potentially make the top 10, which gave us a long list of 37 firms.
We then ran these firms through the various sites to generate a top 10 for each engine (as at 16.30 on 4/10/11) - as you can see from the table below, the results are broadly similar but not identical.
Now, I agree that it is relatively easy to manipulate one of these indices in isolation, but I think it's relatively hard to try to cheat all three at once.
This led me to think that if we could aggregate all three indices, we would get a more balanced, and perhaps more widely acceptable, view.
The problem, as I saw it, was that we needed a measure of the relative strength of each index in order to assign a weight to each engine's scores.
The answer I propose is to consider how each of these indices rates the others. We can use those scores to generate a relative weighting for each platform based on its share of the total:
To explain, the weighting for Twitalyzer was calculated by dividing 160 by 571.
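The share-of-total calculation can be sketched as follows. The Twitalyzer sum (160) and the grand total (571) come from the figures above; the Klout and PeerIndex sums in this snippet are hypothetical placeholders chosen only so the three add up to 571.

```python
# Cross-rating sums for each index: the Twitalyzer sum (160) and the
# grand total (571) are from the article; the Klout and PeerIndex sums
# here are hypothetical placeholders chosen to add up to 571.
index_sums = {"Klout": 220, "PeerIndex": 191, "Twitalyzer": 160}

total = sum(index_sums.values())  # 571
weights = {name: s / total for name, s in index_sums.items()}
print(round(weights["Twitalyzer"], 3))  # 0.28 (i.e. 160 / 571)
```

By construction the three weights sum to 1, so the aggregated score stays on the same scale as the underlying indices.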
The final stage of my analysis applied the weighting factors to each firm's scores, in order to generate something of a consensus view of the ranking.
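A minimal sketch of that final step: each firm's per-index scores are multiplied by that index's weight and summed into a single consensus score. The weights follow the share-of-total method described above; the firm names and scores are hypothetical examples, not real data from the study.

```python
# Weighted consensus ranking: multiply each firm's score on an index by
# that index's weight, sum the products, and sort firms highest first.
# Weights, firm names and scores are illustrative assumptions only.
weights = {"Klout": 0.385, "PeerIndex": 0.335, "Twitalyzer": 0.280}

firm_scores = {
    "Firm A": {"Klout": 55, "PeerIndex": 48, "Twitalyzer": 60},
    "Firm B": {"Klout": 62, "PeerIndex": 40, "Twitalyzer": 50},
}

consensus = {
    firm: sum(weights[idx] * score for idx, score in scores.items())
    for firm, scores in firm_scores.items()
}
ranking = sorted(consensus, key=consensus.get, reverse=True)
print(ranking)  # Firm A edges out Firm B despite a lower Klout score
```

Because no single index dominates the weighted sum, gaming one engine in isolation shifts the consensus score far less than it would shift that engine's own ranking.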
The resulting top ten is as follows:
Here come the caveats:
- Law firm brand Twitter accounts only
- Scores as at 16.30 on 4th Oct 2011