Checking competitive intelligence for accuracy & reliability
One of the problems in competitive intelligence is that sources are often unreliable or inaccurate. Worse, this is sometimes deliberate – the term vapourware in the computer software industry captures the idea: companies leak information about forthcoming products with the aim of forestalling competitor products.
Ranking & Scoring sources and data
One approach to solving this is to use a ranking system, where you allocate points for each item depending on factors such as:
- Has the source been reliable in the past?
- Is the information corroborated by other, independent sources?
- Does the information make sense, or fit with what I already know about the topic?
One such scheme scores each item of information on two axes:
- the source of the information is graded from A to E, where an “A” source is almost always or totally reliable, ranging down to a “D” source that is generally unreliable (“E” is reserved for sources whose reliability cannot be assessed);
- the information itself is scored from 1 to 5, where a “1” means almost certainly true, “2” probably true, “3” probably untrue, “4” almost certainly untrue, and “5” is assigned to information that is impossible to validate as either true or false.
Using this scheme, information that matched or confirmed other information from an almost always reliable source would be given an A1 rating. Conversely, a rumour from a generally reliable source might receive an A5 or B5 rating. The same rumour spread as gossip by the water-cooler or in the bar might get an E5 rating.
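The two-axis scheme above can be sketched in code. This is a minimal illustration, not part of the original FAQ: the function names are hypothetical, and the descriptions for the “B” and “C” grades are plausible interpolations, since the text only defines the “A”, “D”, and “E” grades explicitly.

```python
# Illustrative sketch of a two-axis source/information rating scheme.
# Source reliability grades (B and C descriptions are assumed, not
# stated in the original text).
SOURCE_RELIABILITY = {
    "A": "almost always or totally reliable",
    "B": "usually reliable",          # assumption
    "C": "fairly reliable",           # assumption
    "D": "generally unreliable",
    "E": "reliability cannot be assessed",
}

# Information credibility scores, as given in the text.
INFO_CREDIBILITY = {
    1: "almost certainly true",
    2: "probably true",
    3: "probably untrue",
    4: "almost certainly untrue",
    5: "impossible to validate as true or false",
}

def rate(source_grade: str, credibility: int) -> str:
    """Combine the two axes into a rating such as 'A1' or 'E5'."""
    if source_grade not in SOURCE_RELIABILITY:
        raise ValueError(f"unknown source grade: {source_grade!r}")
    if credibility not in INFO_CREDIBILITY:
        raise ValueError(f"unknown credibility score: {credibility}")
    return f"{source_grade}{credibility}"

def describe(rating: str) -> str:
    """Expand a rating like 'B5' into its plain-English meaning."""
    grade, score = rating[0], int(rating[1])
    return (f"{rating}: source {SOURCE_RELIABILITY[grade]}; "
            f"information {INFO_CREDIBILITY[score]}")
```

For example, `describe(rate("B", 5))` would expand the water-cooler rumour case from a usually reliable source into its plain-English reading.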
The trouble is that a suspect rating (e.g. D4) does not mean the information is wrong, while information from the best sources that matches existing knowledge may still be. In the latter case the error can result from failing to challenge assumptions and received knowledge, or from not checking information adequately – perennial dangers for the CI analyst.
A case example
As an example, in the UK, the Times and Sunday Times newspapers are generally viewed as reliable sources, i.e. “A” type or “B” type (depending on the article author and type of article). On 21 April 2002, the Sunday Times published a news story about the actress Julie Christie (who appeared in classic films such as The Go-Between and Dr Zhivago). The article gave her age and mentioned a son, among other details. The following week (28 April 2002) there was a letter from Ms. Christie disputing most of the news story and denying that she had a son. The Sunday Times’s response was to blame a normally reliable web site for some of the information, and to note that the International Who’s Who and The Film Encyclopaedia gave different birth dates. The point is that if major directories and leading newspapers get it wrong, who can you trust? The answer might seem to be nobody, but this would be incorrect. Part of the role of the CI analyst is to get behind media and corporate obfuscation to the truth. This involves using multiple sources, analysis to identify links and correlations, and finally primary research.
Ultimately, the best information is generally going to come from the original information source or subject (in the above example, Julie Christie herself), and this is where confirmation should be sought. (Although sometimes sources may deliberately try to confuse by giving out false information; the CI analyst also needs to be able to interpret this and identify when it occurs.)
Only where it is not possible to go to the original or primary source (due to a variety of reasons, including legal and ethical considerations) should secondary sources be relied on, and in these cases the analyst needs to take extra care in interpreting the data.
Note: This FAQ was originally published in the Strategic & Competitive Intelligence Professionals’ membership magazine (Competitive Intelligence Magazine – Jul–Aug 2002)