As some of you know, I spent some time working on the technical definitions of school failure and how to calculate it. I published a summary of my conclusions in an article in Papers in Economics (paywalled), and I thought the subject would be settled. Still, word reached me that the Ministry was working on the idea that failure was being overestimated, on the grounds that a student who does not obtain the title at the theoretical age can still earn it a year later. Which is true. But the problem is that they do not understand what the indicator measures. Sometimes you can tell the truth and still talk complete nonsense, as the film Being There (released in Spain as Bienvenido, Mr. Chance) showed, simply because you are talking about bacon when asked about speed, as the Spanish idiom goes.
But let's start at the beginning. Spain traditionally measured the percentage of students in a given grade who were promoted the following year. When compulsory schooling was extended to age 14, back in the 1970s, people began to speak of school failure with reference to the last compulsory grade, then 8º de EGB: with the introduction of compulsory education, the percentage of students who did not obtain the minimum required qualification automatically started to be measured. At that time students left EGB with either a title (if they passed) or a certificate (if they did not). The former could continue their studies in Bachillerato or in FP I, the latter only in FP I, and it was assumed that education was, de facto, compulsory until 16. It is a common misconception that those who passed went to Bachillerato and those who failed to FP, because the truth is that, of those enrolled in FP I, half had the title and half the certificate.
Although this indicator was very convenient for tracking how each school was doing (and, by aggregation, each province, each region and, ultimately, the country), the idea gradually took hold that for territorial aggregates it made more sense to use another indicator, called the gross graduation rate, which captured what happened to the whole population, not only to those enrolled in the final year. (On the one hand, there was chronic absenteeism in certain areas, which the first indicator never measured at all; on the other, the final year accumulated many repeaters, which distorts the data.) This gross rate was the percentage of students who obtained the title out of the total population at the theoretical age for obtaining that title. Whatever was missing up to 100 is what we call gross school failure. For example, in 1985 the gross rate would be: [students who obtained the 8º de EGB title during 1984-85] × 100 / [people who turned 14 in 1985]. Gross failure would be 100 - [gross rate].
That is, both the initial rate (the direct rate) and the second one (the gross rate) had the same numerator; only the denominator changed. And if we are talking about compulsory schooling for everyone, it made sense to include everyone in the denominator. In fact, the gross rate was the usual measure in Europe.
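To make the contrast concrete, here is a minimal sketch with entirely made-up figures (the enrolment and cohort numbers below are hypothetical, not real 1985 data) of how the two rates would be computed:

```python
# Two rates, same numerator, different denominator (all figures hypothetical).

titled = 380_000               # students who obtained the 8º de EGB title in 1984-85
enrolled_final_year = 480_000  # students enrolled in 8º de EGB that year
cohort_14 = 560_000            # people who turned 14 in 1985

direct_rate = 100 * titled / enrolled_final_year  # only those enrolled count
gross_rate = 100 * titled / cohort_14             # the whole age cohort counts

print(f"direct failure: {100 - direct_rate:.1f}%")  # ~20.8%
print(f"gross failure:  {100 - gross_rate:.1f}%")   # ~32.1%
```

With the same numerator, gross failure comes out higher than direct failure whenever part of the cohort is not enrolled in the final year at all.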
When the LOGSE arrived, the gross failure rate temporarily ceased to apply for technical reasons: with part of the students under a system compulsory until age 14 and part under one compulsory until 16, there was no common denominator population, so the old direct-rate system was used again between 1991 and 1999. But once the LOGSE was fully implemented, in 2000, the gross rate of school failure could be calculated again, and the Ministry has done so since then. By then, however, competences had been transferred to the autonomous regions, and these did not realize that they could return to the gross calculation. That first year it made no difference, because the two rates were very similar, but look at what happened from then on:
That is, gross school failure, which included the whole population, rose four points in seven years, while direct failure fell by more than four points over the same period. By the last LOGSE year, the difference was more than eight points. The calculation used by the Autonomous Communities had a measurement error of around 30%. It is as if our tape measure had Pau Gasol (2.15 m) standing 1.54 m tall, and we had entered that figure into the school's computer. So every time school failure figures were published, the regional leaders who came out worse responded that their own data were different and much smaller. And some journalists published one set of figures, and others the other. Why would a politician take action if he has poisoned himself with his own data and believes failure is lower than it really is? And so, of course, failure kept growing year after year.
But why do these two measures differ? My thesis is that it is due to the 25% rule. While Spain's student population was growing, it had to be ensured that schools and teachers (who came from a more demanding system) did not fail too many students, and the system's way of controlling and pressuring schools was to see that failures in the final year did not go much above 25%. When the LOGSE was implemented, keeping a quarter of students failing 4º de ESO was untenable, and so students piled up in the earlier years (or rather, it happened naturally: the system itself favoured it), first in 3º, then in 2º, and finally in 1º. Those who repeated before reaching 4º de ESO very often never got to the final year: they dropped out when they turned 16. And with that, the statistics held.
So much for the history. Now for the technical question: why it is not true that the gross rate is inaccurate merely because a repeating student can obtain the title the following year. This argument carries a rather curious conceptual trap: it sees that a student can obtain the title after repeating, but fails to see that a student can obtain the title at age 17. That sounds absurd, but it is the root of the argument.
Let's first look at a graph comparing the age distribution of students in 4º de ESO with the distribution across grades of students aged 15 (2008-09 academic year, but it is similar in earlier years):
In other words, 63% of students in 4º de ESO are 15 years old, and 59% of 15-year-olds are in 4º de ESO. On the one hand, it is true that 41% of 15-year-olds have not yet reached 4º and therefore have no chance of the title yet; but on the other, 37% of students in 4º de ESO are over 15 and are indeed in a position to obtain it (in fact, the older students, who are still there without having left, are more likely to graduate than those who have fallen behind, because many people never set foot in a 4º de ESO classroom, some 10% of each cohort, or never make it to the final exams).
Therefore, by dividing by the number of 15-year-olds we are not leaving out students who may obtain the title later (since we are including the previous cohort's late finishers, who are obtaining it now); we are simply dividing by a homogeneous group, the age cohort. When the cohorts are homogeneous, the measure is very accurate. Moreover, when we have not one isolated cohort but a fairly stable series of them, as is the case, it is far more reliable: if, with the whole cohort in the denominator year after year, the number of graduates varied sharply, the series would show spikes and the measure for any particular year would be unreliable. But this does not happen.
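A tiny numeric sketch (all shares below are hypothetical, chosen only for illustration) of why stable cohorts make the gross rate accurate even though some students graduate late:

```python
# Why the gross rate already accounts for late graduates when cohorts
# are stable. All shares below are hypothetical.

COHORT = 100_000      # size of every age cohort (assumed constant)
ON_TIME = 0.63        # share graduating at the theoretical age
ONE_YEAR_LATE = 0.07  # share graduating one year late

# Graduates observed in a given year, in steady state:
# on-time finishers from this cohort plus late finishers from the previous one.
grads_this_year = COHORT * ON_TIME + COHORT * ONE_YEAR_LATE

gross_rate = 100 * grads_this_year / COHORT
print(f"gross graduation rate: {gross_rate:.1f}%")  # 70.0%
print(f"gross failure:         {100 - gross_rate:.1f}%")

# The late graduates this cohort will produce next year are exactly offset
# by the late graduates inherited from the previous cohort, so the gross
# rate equals the true eventual graduation share (63% + 7% = 70%).
```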
But there is another measure that corroborates gross failure: in recent years, the Ministry has included in its statistics the percentage of students leaving ESO with and without the title. At first it also broke the figure down by network, giving insight into the failure gap between public and private schools, but the results were quite shocking and that breakdown stopped being published. The Ministry defines the indicator as follows:
The distribution of students leaving ESO by result [...] is an alternative indicator to the gross rate of population graduating in ESO. This new indicator uses only data from the Statistics of Non-University Education, unlike the gross rate, which also uses Population Projections. It defines "students leaving ESO in year X" as the difference between "students aged 15 and over enrolled in ESO in year X" and "students aged 16 and over enrolled in ESO in year X+1" (the age from which exits from this level occur). "Students leaving ESO without the title" is calculated as the difference between "students leaving ESO in year X" and "students graduating in ESO in year X".
In plain words, the indicator gives the percentage of students leaving ESO without the title, out of all students leaving ESO at age 15 or over; let's call it the net rate. It is very similar to gross school failure, except that it leaves out absenteeism before age 16 (which does exist: 2.2% in 2009; I'll tell you a funny story about that some other time). The latest data for this indicator, for the 2007-08 academic year, can be found in Table D3.9 of this pdf.
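A minimal sketch of the net rate as the Ministry defines it above; the function and variable names are mine, and the figures in the example are hypothetical:

```python
# The "net" indicator, following the Ministry's definition quoted above.

def net_failure_rate(enrolled_15plus_x: int,
                     enrolled_16plus_x1: int,
                     graduates_x: int) -> float:
    """Percentage of students leaving ESO in year X without the title.

    enrolled_15plus_x  : students aged 15+ enrolled in ESO in year X
    enrolled_16plus_x1 : students aged 16+ enrolled in ESO in year X+1
    graduates_x        : students graduating in ESO in year X
    """
    leavers = enrolled_15plus_x - enrolled_16plus_x1  # students leaving ESO in year X
    without_title = leavers - graduates_x             # leavers who did not graduate
    return 100 * without_title / leavers

# Hypothetical figures, for illustration only:
print(f"{net_failure_rate(500_000, 150_000, 250_000):.1f}%")  # 28.6%
```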
This net failure was 28.6% in 2007, slightly below that year's gross failure of 30.7%. If you look carefully at the table by communities, you will see that net failure measures virtually the same thing as gross failure (the variation in one explains about 87% of the variation in the other or, in more technical terms, the R² between the two variables is 0.87), except that its mean is somewhat lower. Why? Because it ignores absenteeism during the compulsory stages. This net rate is better in some respects: it captures year-to-year variation better, and it can be used in more contexts, for example to study the differences between public and private schools, or to measure failure when the compulsory schooling age changes. It is also fairer to the education system, which usually bears no responsibility for absenteeism, a problem with a strong social component.
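For anyone who wants to check a fit like that themselves, a quick sketch (the arrays below are placeholders, not the actual figures by community):

```python
import numpy as np

# Placeholder failure rates by autonomous community (not the real figures).
gross = np.array([30.7, 25.1, 38.2, 28.4, 33.0, 22.5])
net = np.array([28.6, 23.9, 35.1, 26.0, 31.2, 21.8])

r = np.corrcoef(gross, net)[0, 1]  # Pearson correlation between the two series
print(f"R^2 = {r**2:.2f}")         # share of variance one explains in the other
```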
If those of you who have had the patience to get this far are feeling a bit lost, let's recapitulate: there is a direct failure rate with a measurement error of around 30% with respect to real failure, which it greatly underestimates (which is why regional administrations favour it); a net failure rate that is very accurate for those who are in school but does not take absenteeism into account (about 2% of the population); and a gross failure rate that takes the whole population into account, which is fairly accurate but can only be used in some contexts.
Quite a mess we had made of the measurement. What is clear is that this is the figure that drags all our educational indicators in the later stages and constitutes the greatest social segregation in our society.