In 2007, the Ministry of Women and Child Development published a report called “Study on Child Abuse: India 2007” (“Report”, http://wcd.nic.in/childabuse.pdf). The report was published with the involvement and support of a number of non-governmental organisations (NGOs) and some mental health consultants, led by the United Nations Children’s Fund (“UNICEF”), Save the Children and an Indian NGO called “Prayas”.
The logos of UNICEF, Save the Children and Prayas are stamped on the report. Their officials are named as part of its research team.
As is well known, UNICEF is a United Nations agency and Save the Children is an international NGO.
POSITION OF UNICEF-SAVE THE CHILDREN ON CHILD POLICY
UNICEF and Save the Children advocate an approach to child welfare whereby the welfare of the child is framed as a question of the legal and human rights of the child; rights that the child can claim directly from the government. The idea is that children should be enabled to speak for themselves.
A key tenet of this approach to child welfare is that in societies where children are seen as the sole or primary responsibility of the family, their voice is suppressed by authoritarian parents who do not see them as individuals in their own right.
A key strategy of UNICEF, Save the Children and allied child rights bodies in propagating this vision of child welfare, is to claim that families are widely abusive of children: sexually, emotionally and physically. On this basis, the argument goes, there is urgent necessity to put in place governmental institutions and processes that will allow children to escape the family, and allow the government to supersede the family in enabling children to do so. These government institutions and processes are given the name “child protection”.
In the world of child rights, therefore, “child protection” is a term of art, a technical term for a certain type of governmental machinery empowered to supersede the family in taking decisions and actions about children.
WHY WE HAVE MADE A STUDY OF THE 2007 REPORT
For reasons discussed in the body of our paper, we believe that the Ministry of Women and Child Development, and hence the Government of India, has endorsed and adopted the UNICEF-Save the Children version of child protection. Although their specific recommendations on child protection are yet to be fully implemented by the Government, many steps have been taken to lay down the institutional infrastructure and legal provisions for it.
The report discussed in this paper spans the entire UNICEF-Save the Children framework of child protection: dealing with physical abuse, sexual abuse, emotional abuse, the case for police and government intervention, and the reasons why the family fails to protect children and, in most cases, perpetrates and abets child abuse.
The report is a foundational document of the Ministry of Women and Child Development. It was one of the first projects to be undertaken when this ministry was formed in 2006, by taking the Department of Women and Child Development out of the purview of the Ministry of Human Resource Development, to be constituted into a ministry in its own right.
At this point in time, the newly formed ministry was looking for a framework, or a basic vision, on which to define itself, and set the agenda for women and children in India. The report lays down some of the basic concepts based on which the ministry has since devised policies and suggested laws regarding children.
As we will see in the paper, the basic position of the report is that the children of India urgently require “protection” and “rights” against adult members of society, in particular their parents, teachers and members of their community. The report argues that governmental intervention is required because Indian families are patriarchal, violent, stifle the voice of children, leave children vulnerable to sexual abuse, and are hostile, neglectful and discriminatory of the girl child. In this manner, according to the report, the rights of Indian children are being violated, and the safety of Indian children is being compromised.
The harsh condemnation of Indian family and society is justified in the report by claiming that there are alarmingly high rates of abuse of children in India, with almost every other child suffering some form of abuse, the majority of which is perpetrated by family members. This is, of course, of a piece with the key tenets and strategy, mentioned above, of the UNICEF-Save the Children approach to and advocacy of child protection.
In this paper we will attempt to demonstrate how the high rates of child abuse claimed by the report were rigged by manipulating statistics and exaggerating the responses of respondents surveyed for the report.
We analyse the report as an example of everything that is debatable, misrepresented and blatantly false in the UNICEF-Save the Children presentation of the state of Indian children, and their suggested solution in the form of western-style child protection, which the government has apparently endorsed.
We intend to publish a series of papers to engage people, especially ordinary people with children and families to care for, on the issue of governmental interference in the raising of children, egged on by racist and anti-family international NGOs. This paper is our first step in that direction.
BASIC STATISTICAL ERRORS
In this part of our paper we discuss errors in the statistical methods and consequent flaws in the data gathered for the report. Broadly, these involved the incorrect use of purposive sampling to make nation-wide claims, unsystematic and arbitrary choice of respondents, and a failure to account for admitted errors and flaws in the quality of data obtained.
MISUSE OF PURPOSIVE SAMPLING
The report is based on a survey of 17,220 respondents, of which 12,447 were children aged five to (under) 18 years; 2,324 were young adults aged 18 to 24 years; and 2,449 were so-called “stakeholders” (including teachers and NGO workers). The report states that a method known as “purposive sampling” was used to identify respondents for the survey.
For the uninitiated, purposive sampling, a form of “non-probability” sampling, is not a sampling method used to make generalisations or predict probabilities of behaviour for an entire population based on the sample, but to observe patterns in a small, well-defined class of persons. For research questions spanning an entire nation or subcontinent (such as India), purposive sampling can at best be used for preliminary studies that enable the researcher to better design the research, or to articulate the types of questions needed to test the hypothesis.
But in the report, data gathered from purposive sampling of 17,220 respondents is applied to the entire population of India, including 44 crore Indian children and their families.
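The danger of generalising from a purposive sample can be illustrated with a toy simulation (the population size, the 20% attribute rate and the recruitment split below are entirely hypothetical; only the sample size of 17,220 is taken from the report). With a probability sample, the sample proportion tracks the population proportion; with a deliberately targeted sample, it need not, no matter how large the sample is.

```python
import random

random.seed(0)

# Hypothetical population of 1,000,000 children, 20% of whom have the
# attribute being measured (all figures here are illustrative).
population = [1] * 200_000 + [0] * 800_000

# Probability sample of 17,220: every child has an equal chance of selection.
random_sample = random.sample(population, 17_220)

# Purposive sample of 17,220: the researcher deliberately recruits where the
# attribute is expected (e.g. through institutions), over-representing it.
with_attr = [x for x in population if x == 1]
without_attr = [x for x in population if x == 0]
purposive_sample = (random.sample(with_attr, 10_000)
                    + random.sample(without_attr, 7_220))

print(sum(random_sample) / len(random_sample))        # close to the true 0.20
print(sum(purposive_sample) / len(purposive_sample))  # about 0.58, far off
```

The purposive sample is just as large as the probability sample, yet its estimate of the attribute rate is nearly three times the true figure; sample size alone cures nothing when selection is deliberate.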
The child respondents were sub-divided into five sub-groups of between about 2200 and about 3100 in size. Much of the data that is applied to the whole country is taken from these small sub-groups, and not even from the overall sample of children participating in the survey.
FAILURE TO CORROBORATE FINDINGS
All research, even research using non-probability sampling, makes some generalisations about the group under study. The convention, therefore, even when purposive sampling is used, is for the researcher to apply a combination of non-probability sampling methods and other verification procedures to corroborate the findings. However, no such steps were carried out by the authors of the report. In support of a few of its findings, the report merely quotes a couple of child abuse studies carried out on groups of children to which the respondents surveyed for the report bore no resemblance, except that they were children.
ARBITRARY AND UNSYSTEMATIC SELECTION OF RESPONDENTS
The selection of respondents for the report’s survey was arbitrary and unsystematic. As we will see below, very little, if any, thought appears to have been given towards developing a rationale for the selection of respondents.
Before starting the survey, a target of 18,200 respondents was set for the survey sample. In the end, however, data was obtained for only 17,220 respondents, yet there is no explanation of how this shortfall would affect the research or what steps were taken to account for it.
In order to arrive at the sample, the country was first divided into six zones: North, South, East, West, Central and North East. The report does not explain the basis for demarcation of the zones. Two states were selected from each zone.
The selection of states, and of districts within states, is said to have been done by comparing “literacy quartiles” and “literacy rates”. Crimes against children as recorded by the National Crime Records Bureau were also considered; it is claimed that all “quartiles of offences/crimes against children” were represented in the states. On this basis, the following states were selected: Mizoram, Assam, Goa, Delhi, Rajasthan, Uttar Pradesh, Bihar, West Bengal, Maharashtra, Andhra Pradesh, Gujarat, Kerala and Madhya Pradesh.
Hindi-speaking states were disproportionately represented, but the sample selection does not account for this. None of the hill states, other than Mizoram in the North East, appears to have been included. The report says Uttarakhand was excluded, even though it fell in the uppermost literacy quartile of the Central Zone, because of “problems of accessibility owing to difficult terrain and widely dispersed population.” It need hardly be said that convenience of access is no rational basis for including or excluding a state from the survey.
By cutting out Punjab, the survey practically ignores an entire religious group, the Sikhs. It is puzzling that the report chose to ignore Punjab at a time when the country is worrying over the drug problem among the youth being reported from there.
In the states left out, entire linguistic communities, such as Oriya and Tamil speakers, have been ignored.
Maharashtra is said to have been chosen because it has a large number of children on the streets and at work, revealing the survey’s assumption that these groups will show higher rates of child abuse. But there was no conscious selection of states with comparatively fewer children on the streets or at work to temper or test this assumption about abuse rates among such children.
Blocks within districts were identified based on a comparison of literacy quartiles – one block was selected from the upper quartile, and the second from the lower quartile. Fifty children were selected from each block. The report gives no explanation for looking at this particular number of children. So we are left in the dark as to why 50, and not any other number of children, were selected per block.
The child respondents were divided into five so-called “evidence groups”: children in family environment not attending school; children in schools; children in institutional care; working children; and street children. But many of the children in schools were also children with families. The same is true of working children and street children, some of whom may also have been attending school. The report does not clarify this overlap. So the division into evidence groups also seems to have been arbitrary.
The report claims that the selection of child respondents for each evidence group was “as representative as possible”. But there is nothing in the sampling methods to show how, and of what, each evidence group was representative.
Children in institutions are said to have been identified on the basis of government records “and with the help of NGOs”. No explanation is given for these categories of selectors or these methods of selection. Schools are said to have been selected through purposive sampling, a non-probability sampling method that is, as explained above, not used to make generalised claims about an entire population. Child respondents for the evidence group “children in family environment not attending school” are stated in the report to have been selected by “quota sampling”, again a non-probability sampling method. Quota sampling involves sub-division of the population under study into mutually exclusive sub-groups, and a selection of candidates in proportion to certain chosen characteristics (for example, religion) in the population. But no such steps are reported here.
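For contrast, here is a minimal sketch of what proportional quota allocation normally looks like. The function name and the population shares below are entirely hypothetical illustrations; nothing of this kind appears in the report.

```python
# Hypothetical sketch of proportional quota allocation, the step that
# "quota sampling" implies but that the report never describes.
def quota_allocation(total_sample, population_shares):
    """Allocate interview quotas in proportion to each sub-group's share
    of the population under study."""
    return {group: round(total_sample * share)
            for group, share in population_shares.items()}

# Illustrative religion shares for a surveyed district (hypothetical).
shares = {"Hindu": 0.79, "Muslim": 0.14, "Christian": 0.02,
          "Sikh": 0.02, "Other": 0.03}

print(quota_allocation(100, shares))
```

The point of the exercise is that quota sampling requires the researcher to know, and state, the population shares of the chosen characteristics in advance; absent that, the label “quota sampling” is empty.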
The report also surveyed “young adults” of the ages 18 to 24 years about their childhood and took the views of so-called “stakeholders” on child abuse. Again, the choice of respondents in these categories was arbitrary, both as to number and type. Apart from their age, the only description of the Young Adults is that they “were engaged in work in the government and private sector, agricultural sector, business etc.” The stakeholders selected are listed as people that “held positions in government departments, private service, urban and rural local bodies and individuals from the community”.
ARBITRARY VARIATION IN SIZE OF DIFFERENT CLASSES OF CHILD RESPONDENTS
The number of child respondents varied arbitrarily from one evidence group to another. The evidence groups are said to be of “fairly equitable sample size”, even though, in fact, the variation between some of these supposedly equitable groups was as high as 40%.
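The 40% figure follows directly from the sub-group sizes quoted earlier, roughly 2,200 and 3,100 children per group (both approximate):

```python
# Relative variation between the smallest and largest "evidence groups",
# using the approximate sizes cited above (about 2,200 and about 3,100).
smallest, largest = 2_200, 3_100
variation = (largest - smallest) / smallest
print(f"{variation:.0%}")  # prints 41%
```

In other words, the largest group exceeds the smallest by slightly over 40%, which is hard to square with the claim of “fairly equitable sample size”.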
The sample size for children in the age group of five to 12 years is stated to have been consistently higher than in the other two age groups, 12-15 years and 15-18 years, but the report fails to state how this variation was accounted for in the final analysis.
LACK OF INDEPENDENCE, OBJECTIVITY AND RELEVANT EXPERIENCE OF PERSONS CONDUCTING THE SURVEY
Independence and relevant experience seem to have formed no part of the data gathering and collation for the report. The only qualification that seems to have been required of the people carrying out the survey was that they be “graduates and post graduates in the field of social sciences”. So it is quite likely that most of the surveyors were fresh out of university, without any experience of raising or being with children.
Not only were the surveyors possibly inexperienced in interacting with children, there also appear to have been no controls in place to account for the limitations in the child respondents’ capacities for understanding questions and articulating responses to them.
The surveyors were instructed not to fill in the “Information Schedule” for children in front of them, but to only take down notes while engaging children in “friendly dialogue” through group discussions and “one-to-one interactions”. After the sessions with the children for the day were over, the interviewers were to feed data into the Information Schedule, based on notes taken. So the responses attributed to children in the report are based on a two-fold interpretation by the surveyor of what the children said, first as interpreted in the notes taken during the interactions with the child, and second, when filling out the Information Schedule.
Also, the Information Schedule was designed in question-answer format, so the responses attributed to the children interviewed were reported by the surveyors as answers to the specific, direct questions listed in the Information Schedule. In other words, the data attributes answers to children even though the interviewer asked no direct questions, but extracted, and hence subjectively interpreted, the answers from an apparently free-wheeling “dialogue” with the child. Part of this so-called dialogue took place in a group of several children, with the distractions and confusion typically attendant on any group activity with small children. We are thus looking at a data collection exercise that involved three layers of subjective interpretation by a possibly inexperienced surveyor, and no reported controls to bring objectivity into the data gathering exercise.
The agencies, sub-agencies and national level committee formed to oversee and co-ordinate data collection at each level, were all appointed by the government itself, in all likelihood acting on the advice of the NGOs who partnered it in preparing the report. So there was no attempt at ensuring independence in the data gathering either.
THE STRANGE CASE OF GOA
In the state-wise analysis of severe sexual abuse of children, the report says the lowest rate was recorded in Goa. It adds that this does not square with the common understanding of the situation in Goa.
We are unable to understand this statement – what is the “common understanding” of Goa to which the report refers? We are aware that there is a general idea that Goa has a problem with substance abuse by the young and attracts paedophile sex tourism. Is the report endorsing this understanding of Goa? If it is, then why did it not make further and better enquiries, particularly since this was a report about child abuse? If the report does not endorse this understanding of the situation in Goa, then why does it make a reference to it?
The report also says that the number of respondents included in the survey from Goa was much smaller than from other states. This should have prompted some redesign of the research, or at least exclusion of the data from Goa. But the report includes the Goa data without any adjustment, merely stating that: “data collection in Goa began late and there were difficulties in the process.”
ADMITTED DEFECTS IN DATA GATHERING
Some of the defects in data gathering are noted in the report itself. At the end of the chapter on Research Methodology it is stated that the data had “impurities”; that the authors of the report were not able to maintain a uniform standard of data collection and “quality control”; and that they were unable to do “corroborative analysis”.
But surely this apparent disclaimer is rather disingenuous. Either the data gathered is reliable, or it is not. If there are defects in the data gathering, then the report has to state how the analysis has been adjusted to mitigate these defects.
To be continued