Wake up and smell the perks.
Recently, Klout, a startup that measures "influence" in social media and purveys "perks," declared a "new era" because they were adjusting their scoring algorithm. (In case you live under a rock – Klout assigns you an "influence score" from 0 to 100 based on your social graph data.)
Here's how they explained the change, which they called the "biggest step forward in accuracy, transparency and our technology":
Today we're releasing a new scoring model with insights to help you understand changes in your influence. When someone engages with your content, we assess that action in the context of the person's own activity. These principles form the basis of our PeopleRank algorithm which determines your Score...
Confused? You weren't alone, but when the dust settled many folks lost 10 or 20 points in their Klout score. The reaction to this new scoring model was mixed. There was the "nonplussed" camp – those who were too cool to admit they knew or cared what their score was, much less whether it dropped. Not surprisingly, this camp is populated with folks who have day jobs unrelated to content creation, audience building or Twitter.
Then there was the "other camp" – those who felt the new scoring model was not a good thing, not good at all. Unlike the first camp, this group cared very much about their Klout score, not out of vanity but for the very practical reason that Klout has a direct impact on their livelihood. This group, largely made up of bloggers, authors and indies with scores in the 40s to 60s, got clobbered, and it didn't go unnoticed, because social credibility for these folks equals brand sponsorships, PR outreach, audition opportunities, etc.
And almost immediately following the change, we saw tweets from authors who saw a softening of book sales and artists who saw a reduction in downloads. So it's no surprise this is the group that cried "foul," a grievance made worse by the fact that the outer edges of the Klout distribution curve seemed to barely feel any difference. High-scoring content distribution machines like Wired and CNET (80+) saw little impact on their scores. Similarly, those at the lower end of the scale were not much affected either.
The whole episode made one wonder what was behind Klout's change.
The answer, on a surface level, is actually pretty simple. Klout had to fix the mechanism that allowed too many people (like Twitter monkeys with 170,000 followers/followees) to rise far too fast within the Klout scoring system. Clearly that's not good, as any scoring system needs a population distribution that is sustainable over time. In effect, they had to "reset the dial."
But this resetting of the scoring dial, and the fallout from it, really underscores the fundamental flaw in any attempt to measure "influence" that is so dependent on subjective context (a problem I noted back in May).
In fact, I am now beginning to wonder if ANY externally driven influence-scoring methodology is useful, given the complexity in determining who has influence and in what areas (more than once I was amused that Klout thought I was an expert on "Russia" or "Warfare" – maybe because my blog is called "Trenchwars").
So I also can't help but question whether Klout would do well not to recalibrate their "Influence" scoring model (which then disqualifies it as a "standard" by any measure) but to recalibrate exactly what they are REALLY measuring. Stripping away the techno-buzz, isn't it more accurate to say that Klout is really measuring a person's content distribution capabilities – not their influence at all?
And if you are willing to consider this approach, then it's much easier to see how Klout becomes a far more effective tool for everyone. Using a content syndication model takes this amorphous influence score and re-expresses it as a standardized and actionable media channel, with useful CPM and effective-reach metrics.
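To make the content-syndication framing concrete, here is a minimal sketch of the two standard media metrics mentioned above. Everything in it – the field names, the sample numbers, the 2% engagement rate – is an illustrative assumption, not anything Klout actually publishes:

```python
# Hypothetical sketch: treating a person's feed as a media channel
# priced with standard advertising metrics, instead of an opaque
# "influence" score. All numbers below are invented for illustration.

def cpm(cost_dollars: float, impressions: int) -> float:
    """Cost per thousand impressions -- the standard media-buying metric."""
    return cost_dollars / impressions * 1000

def effective_reach(followers: int, engagement_rate: float) -> int:
    """Estimate the audience that actually sees and reacts to a post,
    rather than the raw follower count."""
    return int(followers * engagement_rate)

# Example: a blogger with 20,000 followers and a 2% engagement rate,
# charging $50 per sponsored post.
reach = effective_reach(20_000, 0.02)  # 400 engaged readers per post
price = cpm(50.0, 20_000)              # $2.50 per thousand raw impressions
print(reach, price)
```

The point of the sketch is simply that both numbers are standardized and actionable for a brand buying sponsorships, in a way a proprietary 0-100 score is not.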
Personally, I think this makes much more sense all around, even though it lacks the techno-cool buzz that the media loves to talk about.
In the end, it's useful to remind ourselves that, as marketers, even if we can measure influence, we are still miles from the real prize, which is to create the trust that drives a sale. I ended my post in May with an observation that's worth repeating: "In understanding influence – it's the fundamentals of trust that marketers really need to think about. Everything else is noise."
With all this noise – I think I'm getting a headache.