Monday, November 29, 2010


So much for our chaste ears

A few months ago, the Evaluation Institute published the General Diagnostic Evaluation 2009. Primary Education. Fourth Year (careful, 14 MB), the last step of a long process intended to assess competencies in Language, Mathematics, Knowledge of the Physical World, and a fourth called social and civic competence. It is an assessment with a large sample, since it seeks significant results for every Autonomous Community. In total, 28,708 students from 887 schools were assessed, and questionnaires were given to 1,341 teachers, 25,741 families and 874 head teachers.
In short, a study that required considerable planning and organization, technical reliability backed by prestigious scientific advisors, and so on. Unfortunately, the final report is not up to the task: a technically very poor report, in which obvious data are missing, more than doubtful conclusions are drawn, and many questions are left hanging that are not only relevant but correspond to pressing problems of our education system. I am not the only one saying this, but I still think too few of us are saying it, and we should be more. This is compounded because so far (and here comes the news, as I will explain) the database has not been made public, out of sheer political hijacking.
One of the odd points of the report was the one about the relationship between computer use at school and educational results. This is a topic of interest because the Ministry, in collaboration with the Autonomous Communities (all except Madrid and Valencia), is implementing this year an expensive School 2.0 program in which, for political reasons, it is providing communications infrastructure and computers, and giving a "netbook" to thousands of students. Without a single technical or pedagogical reason to support it, without a prior study, without consulting studies from countries that have implemented similar measures, and without learning from the mistakes some regions have already made. In short: zero evidence. But since in this country it seems to work for a politician to spend without being asked for justification beforehand or accounts afterward, so it goes. One would expect to find some evidence in the evaluation report, but instead we find this text (p. 158):
Results based on computer use at school and Internet use
The computer and Internet access, for multiple purposes, have become an element of leisure not only among teenagers but also at younger ages. Like many other aspects of child and youth entertainment, it can have positive or negative effects, depending on the use made of it. Schools, too, have incorporated computer and Internet use as learning tools. It is therefore of interest to investigate how these media facilitate the acquisition of basic competencies in students.
The General Diagnostic Evaluation 2009 included several questions in the students' context questionnaire on the use of information and communication technologies. From the students' responses, we studied the relationship between the frequency of use of, and familiarity with, information and communication technologies and the results in the four competencies assessed.
From the students' responses it emerges that only 4% of students say they use the computer at school every day; these students have a very low average ISEC (-0.34), a higher average age than the sample, below-average educational expectations, and a later start to their schooling.
The results of the other students are not significantly different, whether they made moderate use of information and communication technologies or more frequent use. Nor can anything conclusive be said from the data analysis about how more or less frequent Internet use affects the results. Further analysis is therefore needed to better assess their meaning. It is also necessary to refine the questions posed to students on this topic, to obtain more consistent data allowing a more detailed analysis of the relationship between results and the use of, and familiarity with, these technologies.
Finally, it should be noted that the results of this first General Diagnostic Evaluation 2009 cannot in any case capture the impact of the new plans to strengthen the use of, and familiarity with, computers and the Internet in education as learning tools.
Therefore, this analysis will have to be carried out in subsequent rounds of the general diagnostic evaluations, once the information-gathering tools have been enriched. Analytical techniques will also have to be applied to obtain more accurate and meaningful results.
This is what the report says on the subject, a report I will from now on call "politically correct". As we shall see, never more aptly. Being politically correct, the report says that although there are significant differences between those who use computers every day at school and those who do not, they are few (4% of the sample is roughly 1,150 students; that is not so few, even if, as is probably the case, they are concentrated in 15 or 20 schools). Moreover, these students come from poorer backgrounds, since their average socioeconomic and cultural index (ISEC) is low, -0.34 (though not that low: one third of a standard deviation below the national average), there are more repeaters among them, and their expectations of reaching higher levels of study are low. All this influences the results, of course, but without knowing the size of the difference in scores it is hard to know whether it is warranted or not.
The problem is that this is report B; there was a report A, before review by the Autonomous Communities, which until now had not seen the light. It is the report I will call DCO ("too much for our chaste ears"), which it was decided to lock away lest we wise citizens come to doubt our political decisions. This is what the DCO report says in the same place as the politically correct one:
Results based on computer use at school and Internet use
In this section, as in some of those that follow, a factor analysis was carried out that allowed the classification of variables from the student and family questionnaires concerning students' use of, and familiarity with, information and communication technologies. The variables thus constructed provide greater explanatory power over the results.
Students who say they rarely or never use the computer in class (51%) score higher in the four competencies than those who say they use it every day or almost every day (5%); 44% use it once or twice a week. The differences range from 84 points in social and civic competence to 62 points in mathematical competence, as shown in Figure 5.4.
Figure 5.4. The results according to frequency of computer use at school

The students' Internet use is related in a similar way to their results in the General Diagnostic Evaluation 2009, although in this case the differences are smaller, as shown in Figure 5.5. Students who say they rarely or never use the Internet (27%) score higher in the four competencies than those who say they use the network every day or almost every day (36%). The differences range from 25 points in social and civic competence to 19 points in mathematical competence.
These results on computer use in class and Internet use, both at school and privately, are clearly striking, and further analysis should be carried out to better assess their meaning.
Figure 5.5. Differences in the results by frequency of internet browsing

It should be noted that when Internet use is aimed at having students seek information for their studies, students who make moderate use of the Internet (once or twice a week) are those with the highest scores. Students who use the Internet to communicate with others (chat) every day score lower than those who never use it for this purpose, with differences ranging between 24 and 28 points in the four competencies.
What the DCO report says is that these results are "clearly striking". With differences between 62 and 84 points (against a standard deviation of 100), we can infer with some confidence that the results are both statistically significant and substantively noteworthy. It says nothing about the students' average ISEC, probably because the net difference (that is, after discounting the effect of ISEC) remains substantial. On this we can say something more: according to the politically correct report (Table 4.1, p. 128), the variation in performance associated with one unit of ISEC ranges from 33 to 37 points. It is therefore impossible for a variation of 0.34 ISEC points to explain differences of over 20 points, as is the case here.
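The arithmetic behind that last claim can be sketched in a few lines. The 33 to 37 points per ISEC unit and the 0.34 ISEC gap are the report's figures; the rest is simple multiplication:

```python
# Upper bound on the score difference attributable to ISEC alone,
# using the report's figures: 33-37 points per ISEC unit (Table 4.1)
# and a 0.34-unit gap in average ISEC between the two groups.
points_per_isec_unit = 37      # upper end of the 33-37 range
isec_gap = 0.34                # difference in average ISEC
max_attributable = points_per_isec_unit * isec_gap
print(max_attributable)        # about 12.6 points

# Observed gaps between daily users and rare/never users (DCO report)
observed_gaps = {"social and civic": 84, "mathematical": 62}
for competence, gap in observed_gaps.items():
    print(competence, round(gap - max_attributable, 1), "points unexplained by ISEC")
```

Even taking the upper end of the range, ISEC alone accounts for roughly 12.6 points, far short of the 62 to 84 point gaps observed.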
Can we then say that a pedagogical model based on daily computer use at school is associated with much lower student performance? Can we state that the extension of Escuela 2.0 will bring a reduction in the already poor performance of our students? No, not at all. Both report A and report B are technically very poor, and almost nothing can be asserted on their basis. It would be another matter if society were allowed access to the microdata.
Why can we not say so for sure? Because we do not know what these schools are like, or their quality: not only the composition of their student body, but the quality of the school and of the education authority it belongs to (two aspects systematically ignored in too many analyses which, in my experience, have a significant effect). Nor do we know the number of repeaters in these schools, anything about their social composition, whether they are rural or not, or whether computers were put in these schools precisely because they were particularly troubled... Now, it is far more likely that the model for introducing computers into classrooms that we use in Spain, as a study based on the PISA results for our country suggests, has the effect of lowering student performance.
Now, what we can be sure of is that the School 2.0 program is extremely reckless in its conception and implementation, and should have been supported by some study or prior experiment, or at least a small-scale test, before being applied to all students.
And, of course, that there has been a deliberate concealment from citizens of relevant information about the functioning of the education system. As so many other times, the despot who watches over us paternally considers us unprepared to handle this kind of information. As so often, lying on this issue will come free. It has happened before. And so it goes.

Wednesday, November 24, 2010


CATHEDRAL Vicente Blasco Ibañez


Monday, November 22, 2010


Calculating failure

As some of you know, I spent some time working on the technical definitions of school failure and how to calculate it. I published a summary of my conclusions in an article in Papeles de Economía (paywalled), and I thought the subject was settled. Still, news reached me that at the Ministry they were working on the idea that failure was overestimated, since they reasoned that a student who does not obtain the title at the expected age can graduate a year later, which is true. True, but the problem is that they do not understand what the indicator measures. Sometimes you can tell the truth and still utter complete nonsense, as the film Being There showed, simply because you talk about bacon when asked about speed.
But let's start at the beginning. Spain traditionally measured the percentage of students in a given year who were promoted to the next. When compulsory schooling up to age 14 arrived, back in the 70s, people began to speak of school failure with reference to the last compulsory year, then 8º de EGB: with the introduction of compulsory education, the percentage of students who did not obtain the minimum required qualification automatically began to be measured. At that time students left EGB with a title (if they passed) or a certificate (if not). The former could continue their studies in high school or FP I, the latter only in FP I, and it was assumed that education was in practice compulsory until 16. It is a common misconception that those who passed went to high school and those who failed to FP, because the truth is that of those enrolled in FP I, half had the title and half the certificate.
Although this indicator was very convenient for measuring how each school was doing and, by aggregation, each province, each region and, ultimately, the country, the idea caught on that for territorial aggregates it made sense to use another indicator, the gross graduation rate, which captured what happened to the whole population, not only those enrolled in the final year (on the one hand, there was chronic absenteeism in certain areas, which the first indicator never measured; on the other, the final year accumulated many repeaters, which distorted the data). This gross rate consisted of finding the percentage of students who obtained the title out of the total population at the theoretical age for obtaining it. What is missing to reach 100 is what we call gross school failure. For example, in 1985 the gross rate would be: [students who obtained the 8º de EGB title in 1984-85] x 100 / [people turning 14 in 1985]. Gross failure would be 100 - [gross rate].
That is, in both the initial rate (the direct rate) and the second (the gross rate), the numerator was the same; what changed was the denominator. And if we are talking about compulsory schooling for everyone, it made sense to include everyone in the equation. In fact, the gross rate was the usual one in Europe.
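The difference between the two rates can be sketched in a few lines, following the 1985 formula above. All figures below are invented for illustration; only the formulas come from the text:

```python
# Direct rate: graduates over students enrolled in the final year.
# Gross rate: same numerator, but over the whole age cohort.
# All figures are hypothetical, for illustration only.
graduates = 400_000            # obtained the 8º de EGB title in 1984-85
enrolled_final_year = 520_000  # enrolled in 8º de EGB that year
cohort_14 = 620_000            # people turning 14 in 1985

direct_rate = 100 * graduates / enrolled_final_year
gross_rate = 100 * graduates / cohort_14
gross_failure = 100 - gross_rate

print(round(direct_rate, 1))    # 76.9
print(round(gross_rate, 1))     # 64.5
print(round(gross_failure, 1))  # 35.5
```

With the same numerator, the direct rate looks much rosier than the gross rate whenever part of the cohort never reaches, or never sits, the final year.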
Logse
When the LOGSE arrived, the gross failure rate was temporarily abandoned for technical reasons: with part of the students under a system compulsory until 14 and part under one compulsory until 16, there was no common denominator population, and the old direct-rate system was used again between 1991 and 1999. But once the LOGSE was fully in place, in 2000, the gross school-failure rate could be calculated again, and the Ministry has done so since then. By then, however, competencies had been transferred to the Autonomous Communities, and these did not realize that it was possible to return to calculating the gross rate. That first year it made no difference, since both rates were very similar, but look what happened afterward:


That is, gross school failure, which includes the whole population, rose four points in seven years, while the direct rate fell by more than four over the same period. By the last LOGSE year the difference was more than eight points. The Autonomous Communities' calculation carried a measurement error of around 30%. It is as if, measuring with that tape, Pau Gasol (2.15 m) came out at 1.54 m. So every time school-failure figures were published, the worst-off regional leaders came out saying their data were different and much lower, and there were journalists who published that information as if it were equivalent. Why would a politician take action if he has poisoned his own data and believes failure is lower than it is? So, of course, failure kept growing year after year.
But why do these two estimates differ? My thesis is that it is due to the 25% rule. While the student population in Spain was growing, it had to be ensured that schools and teachers (who came from a more demanding system) did not fail too many students, and the control mechanism was pressure on schools not to exceed by much a 25% failure rate in the final year. When the LOGSE was introduced, keeping a quarter of students failing in 4º of ESO was untenable, so students accumulated in the earlier years (or it happened naturally: the system itself favored it), first in 3º, then 2º, finally 1º. Those who repeated before 4º of ESO very often never reached the final year: they dropped out when they turned 16. And with that, the statistics were kept in check.
So much for the history. Now for the technical question: why it is not true that the gross rate is inaccurate merely because a repeating student can obtain the title the following year. This argument contains a somewhat curious conceptual trap: it notices that a student can obtain the title after repeating, but fails to see that a student can obtain the title at age 17. That sounds absurd, but it is the root of the argument.
Let's first graph the age distribution of students in 4º of ESO alongside the distribution across school years of 15-year-old students (2008-09 academic year, but similar in earlier years):


In other words, 63% of students in 4º of ESO are 15, and 59% of 15-year-olds are in 4º of ESO. On the one hand, it is true that 41% of the cohort has not yet reached 4º and therefore has no chance of the title yet; on the other, 37% of students in 4º of ESO are over 15 and can indeed obtain it (in fact, the older students who are still there without having dropped out are more likely to obtain the title than those lagging behind, because many people never set foot in a 4º of ESO classroom, 10% of each cohort, or never reach the final exams).
Therefore, by dividing by the 15-year-olds we are not leaving out the students who may obtain the title later (since we include those from the previous cohort obtaining it now); we are simply dividing by a homogeneous group of students, the age cohort. When the cohorts are homogeneous, the measure is very accurate. Furthermore, when we do not have an isolated year but a fairly stable series, as is the case, it is far more reliable: if, with the whole population in the denominator year after year, large variations in graduates occurred, they would produce peaks in the series, and the measure for a particular year would be unreliable. But this does not occur.
But there is another measure that corroborates gross failure: in recent years the Ministry has included in its statistics the percentage of students leaving the ESO with and without a title. At first it was also broken down by network, giving insight into the failure gap between public and private schools, but the results were quite shocking and publication ceased. The Ministry defines the indicator as follows:

The distribution of students leaving the ESO according to result [...] is an alternative indicator to the gross rate of population graduating from ESO. This new indicator uses only data from the Statistics of non-university education, unlike the gross rate, which also uses Population Projections. It defines "students leaving the ESO in year X" as the difference between "students enrolled in year X in the ESO aged 15 and older" and "students enrolled in year X+1 in the ESO aged 16 and older" (the age from which exits from this level occur). "Students leaving the ESO without a title" is calculated as the difference between "students leaving the ESO in year X" and "students graduating in ESO in year X".
In short, the indicator gives the percentage of students leaving the ESO without a title over the total number of students leaving the ESO at 15 or older; let's call it the net rate. It is very similar to gross school failure, except that it does not take into account absenteeism before age 16 (which does exist: 2.2% in 2009; some day I'll tell you a funny story on the topic). The latest data for this indicator, for the 2007-08 academic year, are in Table D3.9 of this pdf.
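The Ministry's definition quoted above translates directly into code. The formulas follow the quoted definition; the enrollment figures are invented for illustration:

```python
# Net-rate indicator, following the Ministry's definition quoted above.
# All enrollment figures are hypothetical.
enrolled_15_plus_year_x = 450_000   # enrolled in ESO aged 15+, year X
enrolled_16_plus_year_x1 = 60_000   # enrolled in ESO aged 16+, year X+1
graduates_year_x = 280_000          # graduated from ESO in year X

# "Students leaving the ESO in year X"
leaving = enrolled_15_plus_year_x - enrolled_16_plus_year_x1
# "Students leaving the ESO without a title"
leaving_without_title = leaving - graduates_year_x
net_failure = 100 * leaving_without_title / leaving

print(leaving)                 # 390000 students left the ESO in year X
print(round(net_failure, 1))   # 28.2
```

Note that the denominator here is built entirely from enrollment statistics, with no population projections, which is exactly why the indicator misses students who were absent before turning 16.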
This net failure in 2007 was 28.6%, slightly below the 30.7% gross failure of that year. If you look carefully at the table by communities, you will see that net failure measures virtually the same thing as gross failure (the variation in one explains about 87% of the variation in the other or, in more technical terms, the R² between the two variables is 0.87), except that its mean is somewhat lower. Why? Because it ignores absenteeism in the compulsory stages. This net rate is better in some respects: it better captures year-to-year variations and can be used in more contexts, for example to compare public and private schools, or to measure failure when the age of compulsory schooling changes. It is also fairer to the education system, which usually bears no responsibility for absenteeism, since absenteeism has a strong social component.
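The R² comparison between the two rates can be reproduced from the per-community table with a short function. The rates below are invented stand-ins; the real per-community figures are in the table cited above:

```python
# Pearson R^2 between two series, as used here to compare net and
# gross failure rates across communities. The rates are hypothetical.
def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov ** 2 / (var_x * var_y)

net = [22.1, 28.6, 31.0, 25.4, 35.2, 29.8]    # hypothetical net failure
gross = [24.0, 30.7, 33.5, 27.1, 38.0, 31.2]  # hypothetical gross failure
print(round(r_squared(net, gross), 2))
```

An R² near 1 means the two indicators rank the communities almost identically, differing mainly by the roughly constant absenteeism offset.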
For those who had the patience to get this far but are a bit lost, let's recapitulate: there is a direct failure rate with a measurement error of around 30% of real failure, which greatly underestimates it (and is the favorite of regional administrations); a net failure rate that is very accurate for those who attend school but does not account for absenteeism (2% of the population); and a gross failure rate that takes the whole population into account and is fairly accurate, but usable only in some contexts.
Not a bad mess for a single measurement. What is clear is that this figure drags down all our educational indicators in the subsequent stages and is the greatest source of social segregation in our society.