
Clearly, this popular kind of oral test at TNU falls short of measuring students’ overall oral language proficiency and can be said to lack construct validity and reliability (see 2.5, Chapter 2).

Secondly, the absence of specifications for particular test tasks – especially the components of oral ability to be tested, the relevant areas of language knowledge, and a marking key – results, to some extent, in teachers or test designers producing inadequate and unhelpful tests. In other words, communicative stress is not taken into consideration in the construction of the oral tests.

As can be seen in the four achievement speaking tests (see Appendices 1 & 2), the test questions/tasks – topics – are never accompanied by external prompts to help students make a structured presentation, or by explicit instructions quantifying the language knowledge and ability needed to perform the tasks.

It is essential for test writers to provide clear instructions that help test takers organise a spoken presentation, because students are always encouraged to produce effectively organised speech so that the listener can easily follow what is being said (Brown & Yule, 1983, p. 119).

Also, in order to write test tasks that fit students’ proficiency levels, test writers really need to give explicit instructions quantifying language knowledge and ability. The quantification of performance on a particular task depends largely on the grading of tasks according to cognitive difficulty (Brown & Yule, 1983, p. 121). To put it another way, the same task type can be made easier or more difficult: for example, describing a room with eight elements is apparently more difficult than describing one with five. Test designers and teachers of speaking skills should therefore always bear in mind informed judgements about the degree of this cognitive difficulty, or communicative stress (Figure 2.2, Chapter 2), during the test operationalisation process.

Besides, no official instructions on criteria for marking students’ test performance are provided; thus, the test writers/teachers are unaware of the importance of scoring methods for each test task, and they never design a marking key (see 2.4.3, Chapter 2) instructing assessors on how to assess students’ performance on the tasks. As discussed in 2.4.3, in a marking key, language and skill categories are identified and awarded separate marks according to the test purpose(s). As Underhill (1987, p. 94) points out, the aim of a marking key is ‘to save time and uncertainty by specifying in advance, as far as possible, how markers should approach the marking of each question or task’. With the help of a marking key and the level scale mentioned above, assessors can mark a test more quickly and reliably, since each language or skill category is marked separately.

  • Test Administration Process

Tables 4.1 and 4.3 indicate that the administration of TNU speaking tests reveals many shortcomings. These weak points include (1) a lack of standardisation in test administration, (2) a lack of reliability in marking test takers’ performance, and (3) the lack of a supportive testing environment.

Source:  OpenStax, Collection. OpenStax CNX. Dec 22, 2010 Download for free at http://cnx.org/content/col11259/1.7