Whenever I see news reports about anything related to HIV, I have to brace myself for the parts with numbers. Not because I'm afraid of numbers – I'm geeky enough to have participated in math contests when I was in high school, and even won prizes at them (math books). No, I brace myself for what people will do with numbers to try to validate their points of view.
Most of the uses of numbers that leave me shaking my head involve a lack of context. A classic is the use of percentages to describe trends in HIV infections. First of all, one should be clear that in our society, without compulsory and regular testing of the entire population, we are usually talking about statistics on diagnoses, and only sometimes about estimates of actual HIV infections. Second, one can't compare groups on percentage changes alone without suggesting that an increase in diagnoses in one group from 2 to 4 (100%) is somehow more significant than an increase in another group from 300 to 303 (1%). If you see the percentages, you also need to look for the numbers.
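The arithmetic of that trap can be sketched in a few lines (the diagnosis counts are the made-up illustrative ones from above, not real data):

```python
def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100

group_a = (2, 4)      # small group: 2 -> 4 diagnoses
group_b = (300, 303)  # large group: 300 -> 303 diagnoses

print(pct_change(*group_a))  # 100.0 -- sounds alarming
print(pct_change(*group_b))  # 1.0   -- sounds negligible

# Yet the absolute increases are 2 and 3 new diagnoses respectively --
# the "1%" group actually added more people than the "100%" group.
```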
When you don't see the absolute numbers, you might not understand other aspects of the meaning of what you are seeing. It is all well and good to say that gay men are an increasing portion of the new diagnoses, but are they a bigger slice of a shrinking or stable pie? Could those absolute numbers also be decreasing, while the percentage goes up? The percentages won't tell you that on their own.
There are other times when the absolute numbers on their own don't tell the whole story. I remember my own reaction in the context of a national meeting once when someone from a more rural area talked about a "huge" increase in people being seen in the local AIDS organization. The number was 4 or 5 in the past year. The organization I worked for at the time regularly welcomed between 40 and 60 new HIV positive people every year, and it was only one of almost twenty organizations in the city. In the context of the population of the region being served, however, those 4 or 5 people were probably very significant.
Another little statistical game occurs in the classifying of the data. Classifying people is a very difficult task when we are talking about those numbers of new diagnoses. People don't stay in their own boxes, so they might fit into several different categories, or might even justify a new category that epidemiologists and those responsible for surveillance are not ready to create (it's difficult to follow trends when you keep splitting the lines into sub-categories). There's more than a little interpretation involved in the classifying, so it's worth asking questions about the results.
One of the things that has most annoyed me recently is the interpretation of how prevalent condom use is among gay men. We all know the community started at zero – or almost zero – condom use at the beginning of the 1980s, adopted the condom strategy extremely successfully, and that use seems to have declined lately. When the proof of that decline is shrouded in odd definitions, however, I get suspicious. I recently saw one definition that classified people into two groups: those who had consistently used condoms in the last six months, and those who had at least one incident of not using a condom in the last six months. Now suddenly the portrait is of condom-users and condom-eschewers, and the person who had sex sixty times in the last six months, only once without a condom, finds himself in the latter group. That's how you get to a rate of consistent condom use somewhere south of 30%, but it doesn't seem to be a very accurate portrait, does it?
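Here is a minimal sketch of that all-or-nothing definition, using the hypothetical sixty-encounters example from above (the numbers are illustrative, not from any survey):

```python
def classify(acts_total, acts_with_condom):
    """The binary definition: one slip in six months = 'inconsistent'."""
    return "consistent" if acts_with_condom == acts_total else "inconsistent"

print(classify(60, 60))  # consistent
print(classify(60, 59))  # inconsistent -- despite condoms in 59 of 60 acts

# A rate, rather than a binary label, would tell a different story:
print(59 / 60 * 100)  # roughly 98% condom use, filed under "inconsistent"
```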
Although I have not entered any math contests lately, I reiterate my geeky childhood love of numbers. I just have to add to that love a cheeky appreciation of context and a freaky suspicion that will always drive me to see what else my little number friends might be telling me. Or hiding from me.
This article is also published on Positive Lite here.