
Linguistics: Lexically Driven Error Detection and Correction

CORNELIA TSCHICHOLD
Institut d'anglais
Université de Neuchâtel

Abstract:
Recent progress in multimedia technology used in CALL has clearly been more impressive than progress in error detection capability. In order to overcome the obstacles in error detection needed for intelligent feedback in CALL, this paper calls for a new focus on lexical items, both single words and multiword units of various types. Single and multiword lexemes should not only be explicitly taught in CALL, but could also provide the key to more effective feedback on the language production by learners.

KEYWORDS

Lexicon, Multiword Units, Feedback, Error Detection, ICALL

1. INTRODUCTION

In the field of CALL, programs have developed from mostly text-based, drill-type exercises into colorful multimedia presentations of authentic language situations. Technically at least, it is no major problem today to include thousands of sound files and long video sequences in a CALL program. The pedagogic developments in CALL and intelligent CALL (ICALL) have clearly been less spectacular. While early CALL was strongly influenced by behaviorist principles of language learning and consisted mostly of repetitive, form-focused pattern drills, the rise of multimedia computer technology coincided with a new emphasis on so-called communicative language teaching (Levy, 1997). In an attempt to offer more realistic language material, videos of conversations superseded the earlier written texts. In this way, users could be offered authentic language input in abundance, alongside some relatively simple exercises, usually of the multiple-choice type, to practice their new knowledge.

Academic ICALL projects have often laid somewhat less emphasis on the progress in multimedia technology than commercial CALL programs but have instead tried to take advantage of the less spectacular, but nevertheless impressive advances in computational linguistics and natural language processing (NLP) with a view towards analyzing the program user's linguistic production. This endeavor—to analyze learners' output and thus to be able to provide them with feedback on their language production in ways similar to those of a teacher correcting a student's essay—essentially turned out to be a larger bite than could be chewed in the short term. While the field of ICALL has produced some very impressive programs (e.g., see the projects described in Holland, Kaplan, & Sams, 1995), on the whole it has not been successful from a commercial perspective.1 The typical CALL program on the market today still relies on multiple-choice tasks and gives practically no feedback on the language produced by learners when that production exceeds single words, and even this feedback is sometimes disappointingly and unnecessarily simplistic.2

Web-based CALL, with its more restricted possibilities for feedback, usually relies on the users to compare their own answers to model answers that are provided on request. Error detection and correction in web-based CALL is thus normally the job of the learners themselves. The lack of adequate feedback, both in web-based and CD-ROM-based material, must therefore be seen as one of the crucial deficiencies of today's CALL programs.

In this paper, I will first consider the role of feedback in CALL, then discuss the problems involved in implementing feedback in a CALL program, before looking at developments in linguistics that could provide a solution, allowing us to overcome the obstacles and eventually develop better CALL systems.

2. THE IMPORTANCE OF FEEDBACK IN CALL

Feedback that consists of the percentage of users' correct or incorrect answers to multiple-choice questions is helpful only to a very limited extent because activities such as choosing one of the answers in a multiple-choice task can hardly be considered true foreign language output. In order for CALL programs to be truly helpful, they need to be able to react in a more meaningful way to texts produced by students. Research in second language acquisition theory stresses the importance of linguistic output for progress in the language learning process to occur.3 To date, the lack of output opportunities for learners and insufficiency of feedback on their output must be considered one of the major drawbacks of CALL. As Chapelle (1997) points out, mere mouse clicks do not constitute useful linguistic output.

Needless to say, CALL programs can also be used simply to present foreign language material to students, in the way printed materials or conventional audiovisual materials do. But such an application does not make full use of the computer's interactive potential. Ideally, a CALL program should offer students guidance in the learning process, take over some of the administrative work involved in language learning (e.g., which words and which grammatical structures have been learned, which ones need to be revised, and which communicative functions still need practicing), and, last—but certainly not least—correct at least some of the errors students are bound to make. Keeping track of student performance on the one hand and error diagnosis on the other thus become key elements in such a system since these two elements form the basis for meaningful feedback and customized exercises.

One of the reasons for the lack of intelligent feedback on learners' language production is the fact that there are today no viable and comprehensive computational grammars capable of dealing with the error types found in learner language. The computational grammars that are used (more or less) successfully in current NLP applications, such as machine translation, are relatively robust grammars constructed on the underlying assumption that the text they have to deal with is correct or, at least, that it is not the grammar's task to correct the text. Such grammars are thus inherently unsuitable for error detection and correction. Since learners' errors come in such variety (and sometimes such quantity), it is not easy to write such a grammar either. Teachers and other human beings typically rely on a large stock of linguistic and nonlinguistic knowledge when mentally or explicitly correcting learners' utterances. Experienced teachers often have the additional resource of being able to contrast the learner's first language and the target language on several linguistic levels. This type of knowledge, covering a number of important aspects of language (semantics, pragmatics, and contrastive linguistics among them), has so far not been formalized to a sufficient extent to be implemented in a comprehensive NLP system.

Due to this lack of highly formalized linguistic knowledge, what is left for computers to do, at least at the moment, is to focus on low-level errors of spelling, morphology, and certain parts of syntax. The robust computational grammars used for this purpose often produce superficial and incomplete analyses that are then supplemented by a number of error detection strategies. Similar grammars can be found in some of the grammar checkers integrated in today's word-processing software. The intended users of such software are native speakers who have made a minor mistake while typing but who otherwise have a good command of the written language and enough linguistic intuition to critically evaluate the grammar checker's response to their writing. Some (noncommercial) software aimed at language learners uses similar methods, supplemented by strategies that target typical interference errors and other types of learner errors (articles in Schulze, Hamel, & Thompson, 1999; Vandeventer, 2001; Vandeventer & Ndiaye, 2002). The usefulness of such grammar checkers for CALL purposes can be doubted, however, because of the perpetual danger that language learners will be confused by superfluous or even erroneous messages (Dagneaux, Denness, & Granger, 1998; Tschichold, 1999).

Given this state of affairs, what possibilities are there for improving error detection in learners' texts, and which research areas could supply such possibilities? I want to propose a change of focus, away from what has traditionally been called grammar, and towards the lexicon as a starting point of the analysis of learner language.

3. ERROR ANALYSIS

For traditional studies in error analysis, lexical errors were clearly not at the center of attention. Instead, the category of lexical errors often tended to be a label for residual errors that could not be categorized in any other way.4 But the sheer number of errors that are in some way related to the words used by nonnative language users is too high to be neglected. Cutting (2000) shows that vocabulary is by far the most important source of error for nonnative students at a British university, with grammar being only marginally more problematic than it is for native speaker students. According to the review of the literature in James (1998), lexical errors in general are among the most frequent types of errors made by learners. Certain categories in his typology of lexical errors could be dealt with very easily in a CALL setting—misformations5 and distortions6—since these two processes do not result in existing words. The products of such lexical errors are not likely to be listed in an electronic dictionary and can thus normally be detected by a simple spell checker.
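
As an illustration of how easily such non-words can be caught, the following minimal sketch (in Python) checks each word against a word list and ranks correction candidates by edit distance. The tiny word list is a placeholder; the example distortions *littel and *deepth are those cited in note 6.

# Minimal sketch of dictionary-lookup spell checking for distortions and
# misformations. The word list is a placeholder; a real system would use
# a full electronic dictionary.
KNOWN_WORDS = {"little", "depth", "interesting", "baby", "car"}

def edit_distance(a: str, b: str) -> int:
    # Plain Levenshtein distance, used to rank correction candidates.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def check_word(word, lexicon=KNOWN_WORDS, max_dist=2):
    # Return None for listed words, otherwise nearby candidates as suggestions.
    if word.lower() in lexicon:
        return None
    return sorted((w for w in lexicon if edit_distance(word.lower(), w) <= max_dist),
                  key=lambda w: edit_distance(word.lower(), w))

print(check_word("littel"))   # ['little']  (a distortion, not an existing word)
print(check_word("deepth"))   # ['depth']
print(check_word("depth"))    # None: listed, so nothing is flagged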

A more important category of lexical errors comprises those that can be grouped under the label of semantic or word choice errors in the widest sense. For such errors, even advanced NLP techniques are of little help,7 so the only help available via a CALL program today is support for vocabulary acquisition in explicit or implicit form, a challenge which is still waiting to be taken up by CALL developers.

In the area lying between such orthographic and formal errors described earlier and the semantic errors just mentioned, we find a large group of errors at the interface of lexis and grammar: for example, cases where an -ing-form follows an English verb requiring an infinitival complement, a wrong preposition introducing the verbal complement, or the choice of the wrong weak verb in a collocation (Lennon, 1996). This is an area where a focus on the lexicon could eventually lead to marked improvements in CALL. Such progress cannot be had quickly, however, since the computational lexicon needs to be richly encoded before it can be put to use in a grammar checker, and this encoding work still needs to be done.
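
One way to capture such lexis-grammar errors is to store the required complementation pattern with each lexical item and check the immediately following material against it. The sketch below does this for three invented entries; the entries, pattern labels, and checks are illustrative assumptions, not an existing resource.

# Sketch of a lexicon-driven check at the lexis-grammar interface. The three
# entries and the pattern labels are invented for illustration.
COMPLEMENT_LEXICON = {
    "enjoy":  "V-ing",    # *enjoy to read   -> enjoy reading
    "want":   "to-inf",   # *want going      -> want to go
    "depend": "on NP",    # *depend of       -> depend on
}

def check_complement(verb, following_words):
    # Flag a likely complementation error after a known verb.
    pattern = COMPLEMENT_LEXICON.get(verb)
    if not pattern or not following_words:
        return None
    nxt = following_words[0]
    if pattern == "V-ing" and nxt == "to":
        return f"'{verb}' normally takes an -ing complement, not 'to' + infinitive"
    if pattern == "to-inf" and nxt.endswith("ing"):
        return f"'{verb}' normally takes 'to' + infinitive, not an -ing form"
    if pattern.startswith("on") and nxt in {"of", "from", "at"}:
        return f"'{verb}' normally takes the preposition 'on'"
    return None

print(check_complement("enjoy", ["to", "read"]))
print(check_complement("depend", ["of", "the", "weather"]))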

Apart from their frequency, the gravity of lexical errors is also a good argument for placing such errors high on the priority list of CALL developers. James (1998) posits two hierarchies of errors where lexical errors are judged to be the most severe type, compared to errors of orthography, word order, and verb forms. Teachers and native speaker laypersons differed only in the order of error gravity for nonlexical errors. Both groups ranked lexical errors highest, that is, as being the biggest hindrance to communication. Even in explicitly form-focused instruction, communication in the foreign language can be assumed to be the ultimate goal of the learning process, so errors that threaten to make communication impossible should receive a high degree of attention. Both the frequency and the gravity of lexical errors should therefore make them the highest priority in any ICALL program. This seems particularly obvious for a language like English, with its very large vocabulary and relatively simple morphology.

4. CORPUS LINGUISTICS

Arguably, the most important development within linguistics in recent years concerns findings from corpus linguistics. Corpora have always been used to a certain extent in linguistic and lexicographic research, but only with the advent of inexpensive computers and large machine-readable databases of texts have corpora become a major, easily accessible resource for linguists. Corpora, even untagged corpora, lend themselves very well to lexicographic studies, whereas grammatical structures are more difficult to study and often require a tagged and parsed corpus. Many studies in corpus linguistics therefore center on individual lexical items and their typical behavior in a given text type. Rather than concentrating on the grammaticality or ungrammaticality of particular combinations of words, corpus linguistic studies aim to find the typical, preferred usage and distinguish that usage from the highly improbable. One of the first truly significant findings from corpus linguistics was the sheer quantity and pervasiveness of multiword lexemes and prefabricated stretches of linguistic material that can be found in any text (Sinclair, 1991). This insight is now gradually beginning to influence teaching syllabi in EFL/ESL (Lewis, 1993; Nation, 2001; Nattinger & DeCarrico, 1992). The main argument for teaching multiword lexemes is that knowing such lexical chunks helps learners produce language fluently and thereby approach a native speaker's use of the language.
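
A very small illustration of how recurring word combinations surface even in an untagged corpus: the sketch below simply counts adjacent word pairs in a toy corpus (invented here) and keeps the pairs that recur. Real corpus studies use large corpora and association measures such as mutual information.

# Counting recurring word pairs in a tiny untagged corpus as candidate
# multiword units. The corpus and the threshold are purely illustrative.
from collections import Counter

corpus = ("she had to take a break after work . they take a break at noon . "
          "we take part in the project . students take part in class .").split()

bigrams = Counter(zip(corpus, corpus[1:]))
candidates = [(pair, n) for pair, n in bigrams.items() if n >= 2 and "." not in pair]
for (w1, w2), n in sorted(candidates, key=lambda item: -item[1]):
    print(f"{w1} {w2}: {n}")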

Corpora of learner language are more recent, but they do exist as well. A number of articles on the first major comparative, corpus-based project on English as a learner language can be found in Granger (1998). One of the interesting findings from research comparing learner English to native-speaker English is the differing frequency of lexical items. Granger and Tribble (1998) show that relatively general words (e.g., different, important) are often overused by learners when compared to native speakers of the same age group.

According to Hunston (2002), findings from such studies on learner corpora are not easily transferred into the language-teaching context, however. Overuse of certain items (so-called 'teddy bear words') in particular cannot be corrected simply by advising learners to use those items less frequently. Ringbom (1998, p. 50) concludes that "[t]he limited vocabulary that advanced learners have in comparison with [native speakers] is a main reason for the general impression of learner language as dull, repetitive and unimaginative." Learners need to be offered alternative words or expressions and given the chance to practice them actively, a task which CALL can support quite easily because thesauri are readily available and can be linked to vocabulary exercises. Learner corpus studies should also eventually produce more reliable statistics on error distribution, thus allowing CALL developers to better customize their products.
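
The following sketch combines both ideas: it flags items a learner uses far more often than a reference corpus would predict and offers thesaurus alternatives. The reference rates and the lists of alternatives are invented placeholders.

# Sketch of flagging overused items and linking them to thesaurus
# alternatives. Reference rates and alternative lists are invented.
REFERENCE_RATE = {"important": 0.0009, "different": 0.0012}   # per-token rates (assumed)
ALTERNATIVES = {"important": ["crucial", "major", "significant", "vital"],
                "different": ["distinct", "diverse", "varied"]}

def flag_overuse(tokens, ratio_threshold=2.0):
    # Report words whose rate in the essay far exceeds the reference rate.
    report = []
    for word, ref_rate in REFERENCE_RATE.items():
        rate = tokens.count(word) / len(tokens)
        if rate > ratio_threshold * ref_rate:
            report.append((word, rate / ref_rate, ALTERNATIVES[word]))
    return report

essay = ("it is important to note another important and different point "
         "because the important issue is different").split()
for word, ratio, alternatives in flag_overuse(essay):
    print(f"'{word}': about {ratio:.0f} times the expected rate; try {', '.join(alternatives)}")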

5. THE LEXICAL TURN IN LINGUISTICS

After a period when linguistics was generally dominated by an emphasis on syntax and on the language user's knowledge of rules, the lexicon has now come back into focus, partly under the influence of corpus linguistics. An increasing number of publications on the role of the lexicon both in first and second language studies document this trend (e.g., Levelt, 1989; Sinclair, 1991; Lewis, 1993; Carter, 1998; Singleton, 1999; Schmitt, 2000; Nation, 2001).

Levelt's (1989) lexical hypothesis posits that speaking is essentially lexicon-driven. Any grammatical form can only be activated once a lexical item has been chosen (via its meaning) by the speaker. In such a model, where meaning and the lexical items used to express the meaning take precedence over syntactic patterns in the production of utterances, lexical items obviously need to have a rich internal structure, "at least, the meaning of each item and its syntactic, morphological, and phonological properties" (Levelt, 1989, p. 232). For CALL, such an approach would imply a greater emphasis on teaching vocabulary and on teaching words in a richer form than is usually done today. It means that simply giving translational equivalents is not enough to learn and practice a word but that all lexical items need to be taught and practiced in a context which makes clear in which grammatical pattern(s) the item is normally used and what its typical lexical contexts are.

Lewis' (1993) lexical approach to language teaching calls for such a change of focus, away from lexicalized grammar, concentrating instead on heavily contextualized, grammaticalized lexis. Explicit teaching of grammar rules is replaced by pointing out patterns, centered around groups of lexical items that behave similarly. Some of the exercises he proposes cover typical phraseological types, such as collocations. Nation (2001) similarly focuses on vocabulary and the many different types of knowledge language users need for each word. The scale of the language teaching and learning task becomes clear when he states that "to read with minimal disturbance from unknown vocabulary, language users probably need a vocabulary of 15,000 to 20,000 words" (p. 20). Clearly, not all of these words need to be known actively as well, but it does explain why learners typically buy and use a bilingual dictionary much more often than a grammar of the foreign language. This is a further reason CALL developers should be interested in vocabulary: language learners are, too. Most language learners understand the need to learn vocabulary (in an explicit way rather than incidentally because the explicit way is more efficient), and many of them routinely carry dictionaries and booklets or collections of filing cards listing words they want to learn. Today, the support that is offered to them for this task from CALL programs is far from optimal, especially where collocations and other multiword units are concerned. The typical vocabulary-building exercise presents the item in isolation and asks for its translational equivalent. Collocations and other phraseologisms rarely appear and are practically never taught (Nesselhauf & Tschichold, 2002).

In a paper on vocabulary acquisition in CALL, Ellis (1995) points out that the insights from research on vocabulary learning have been largely ignored by CALL program designers. Programs that have come onto the market over the last few years fare no better in this respect. The spacing effect in particular—the phenomenon that words are best learned through repetition with longer and longer delays before a lexical item comes up for revision—is rarely implemented in CALL, despite the fact that computers are ideally suited to spacing repetitions of words in such a way that learning can be maximally efficient. Thanks to virtually limitless storage capacity, computers are also the ideal medium to display a word in many different contexts during this process of repetition. It is therefore not easy to understand why CALL developers have not all taken up these ideas and produced some good vocabulary-building software.
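
As a concrete illustration of how little machinery the spacing effect requires, the sketch below implements a simple Leitner-style schedule in which every correct answer doubles the interval before an item is revised again and every error resets it. This is a generic scheme, not one proposed in the paper; the one-day starting interval is an assumption.

# Simple spacing-effect scheduler: correct answers double the revision
# interval, errors reset it. Generic Leitner-style sketch; the one-day
# starting interval is an assumption.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class VocabItem:
    lemma: str
    interval_days: int = 1
    due: date = field(default_factory=date.today)

    def record_answer(self, correct: bool) -> None:
        # Update the interval and the next due date after an exercise.
        self.interval_days = self.interval_days * 2 if correct else 1
        self.due = date.today() + timedelta(days=self.interval_days)

def due_today(items):
    return [item for item in items if item.due <= date.today()]

item = VocabItem("take part in")
item.record_answer(correct=True)    # revise again in 2 days
item.record_answer(correct=True)    # then in 4 days
item.record_answer(correct=False)   # error: back to a 1-day interval
print(item.lemma, item.interval_days, item.due)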

The large amount of information on individual words, the sheer quantity of both simple and complex lexical items to be learned before authentic texts can be read without difficulty, and the achievement of reasonable fluency in language production are major obstacles on the learner's way to success. CALL programs could fruitfully address these aspects of language learning. Compared with paper dictionaries, computers can store more information and, even more important in this context, connect different "entries" in several ways and display only certain aspects to users, depending on the stage of learning.

CALL developers can also exploit the interactive potential of computers by introducing exercises in which the words that are being learned are produced in different contexts. Such language production by learners could then be checked quite easily without (much) parsing.

6. LEXICALLY DRIVEN CALL

Research in error analysis, corpus linguistics, and phraseology shows that the lexicon deserves more attention from linguists, language teachers, and CALL practitioners. Hunston (2002, p. 135) hints at the potential for CALL and ICALL: "There is … enormous potential for growth in this area, as observations regarding the phraseology and use of individual words could be made available to writers in the form of lexically sensitive grammar-checkers, on-line thesauruses and the like." Granger and Tribble (1998) give the example of the adjective important which is used considerably more frequently by learners than by native speakers of English. They suggest an exercise based on a concordance of learner data, where the word important has been replaced by a blank. Students are asked to fill in one of the alternative adjectives offered (e.g., critical, crucial, major, serious, significant, or vital). This type of exercise could easily be adapted for use in a CALL program and, if placed towards the end of a phase teaching those adjectives, could provide a useful step towards consolidating the knowledge of this range of adjectives. If a thesaurus could be dynamically adapted to the learner's level in such a way that it only proposes words that are comprehensible to the learner, it could be used as one component of a CALL system focusing on the learner's lexical competence.
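
A sketch of how such a gap-fill exercise could be generated automatically is given below: the target adjective is blanked out in each concordance line and a subset of the alternative adjectives is offered. The three example sentences are invented; the set of alternatives is the one listed above.

# Sketch of generating the gap-fill exercise described by Granger and
# Tribble (1998). The concordance lines are invented examples.
import random
import re

TARGET = "important"
ALTERNATIVES = ["critical", "crucial", "major", "serious", "significant", "vital"]

concordance = [
    "Pollution is an important problem in many large cities.",
    "The committee made an important decision last week.",
    "Sleep plays an important role in memory.",
]

def make_gap_fill(lines, target=TARGET, options=ALTERNATIVES, n_options=4):
    exercise = []
    for line in lines:
        gapped = re.sub(rf"\b{target}\b", "______", line)
        exercise.append((gapped, sorted(random.sample(options, n_options))))
    return exercise

for sentence, options in make_gap_fill(concordance):
    print(sentence)
    print("   choose one of:", ", ".join(options))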

In order to give CALL users intelligent feedback on their language production, such a lexically driven system needs a component that can analyze or parse sentences and sentence fragments. No parser at present is able to handle highly erroneous language to a degree that could make it useful for ICALL systems, but so-called chunk parsing (Abney, 1991) could provide at least a partial solution to this problem. A chunk parser assembles continuous stretches of sentences into chunks. Typical chunks are simple noun phrases, prepositional phrases, and verb forms. The grammar needed to analyze such chunks is quite simple because most of the difficult decisions are left to a later stage of analysis. Consequently, prepositional phrases, for example, are normally left unattached, and verbal chunks do not include the verbal complements (with the exception of pronominal objects). When dealing with erroneous learner language, this restriction gives chunk parsing a great advantage over traditional full parsing. Since the internal structure of chunks is quite simple, we can assume that learners soon master these structures. "By contrast, the relationships between chunks are mediated more by lexical selection than by rigid templates" (Abney, 1991, p. 256). Those relationships are thus presumably more difficult and time-consuming to learn and will vary more strongly from language to language. They are therefore likely to give rise to errors in the learner's language. Since these errors depend to a large extent on the lexical items found in the chunks, error analysis needs to be centered on lexical items.
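
To make the idea concrete, the sketch below runs NLTK's regular-expression chunker over a hand-tagged learner sentence. The chunk grammar is a simple illustrative one, and the part-of-speech tags are supplied by hand so the example stays self-contained; a real system would tag the input automatically.

# Chunk parsing sketch using NLTK's RegexpParser. The chunk grammar is
# illustrative; POS tags are supplied by hand to keep the example
# self-contained.
import nltk

grammar = r"""
  NP: {<DT>?<JJ>*<NN.*>+}    # simple noun phrase
  PP: {<IN>}                 # bare preposition; attachment is left open
  VP: {<MD>?<VB.*>+}         # verb group without its complements
"""
chunker = nltk.RegexpParser(grammar)

# Learner sentence "*She enjoys to read long novels", tagged by hand.
tagged = [("She", "PRP"), ("enjoys", "VBZ"), ("to", "TO"),
          ("read", "VB"), ("long", "JJ"), ("novels", "NNS")]

print(chunker.parse(tagged))
# The chunks themselves (the verb groups and the noun phrase 'long novels')
# come out well formed; the error lies in the relation between 'enjoys' and
# its complement, which is exactly where lexical information is needed.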

If we now imagine a grammar checker that analyzes texts in two stages, first by parsing sentences into chunks (dealing with more local errors) and then by proceeding to a second, lexically driven analysis, the next obvious question concerns the type of information such a grammar checker would need for each lexical item. For the stage of chunk parsing, the typical amount and type of information found in a dictionary entry is probably sufficient. For the second, lexically driven stage of analysis, however, each lexical item needs to be annotated with a kind of local grammar and its typical lexical contexts, and therefore requires clearly more data than is usually found in dictionaries, either electronic or paper. Because much of the research for such an enriched computational lexicon still needs to be done, and suitable formats for encoding the results first need to be found and implemented, producers of grammar checkers might consider such an enterprise rather utopian. In CALL, however, we need not necessarily start off with a grammar checker for native speakers (who know several tens of thousands of words) or even advanced learners of English (who could have a vocabulary of well over five thousand words); we can instead envisage a foreign language grammar checker covering only two or three thousand words and still being a very useful tool to a great many learners.
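
What such an enriched entry might contain can be sketched as a simple data structure: a local grammar for the item plus its typical lexical contexts. All field names and values below are assumptions made for illustration, not an existing lexicon format.

# Sketch of an enriched lexical entry with a local grammar and typical
# lexical contexts. Field names and content are illustrative assumptions.
ENTRY = {
    "lemma": "interested",
    "pos": "ADJ",
    "local_grammar": {
        "complement": "in + NP/V-ing",      # licenses 'interested in (doing) something'
        "typical_position": "predicative",
    },
    "collocates": {
        "adverbs": ["very", "particularly", "deeply"],
        "frames": ["be interested in", "become interested in"],
    },
    "learner_notes": {
        "L1_interference": {"German": "sich interessieren für -> *interested for"},
    },
}

def expected_preposition(entry):
    # Read the expected preposition, if any, out of the local grammar.
    complement = entry["local_grammar"].get("complement", "")
    first = complement.split()[0] if complement else ""
    return first if first.islower() and first.isalpha() else None

print(expected_preposition(ENTRY))   # 'in'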

7. LEXICALLY DRIVEN ICALL

Users of a lexically driven ICALL system could start their use of the program by typing in an essay of a specified length. A method of the kind described in Goodfellow, Lamy, and Jones (2001) could then be used to assess the learners' vocabulary range (i.e., their quantitative lexical knowledge) and propose summative automatic feedback on their essays. On the basis of this first impression, the learners could be directed to a certain entry level for vocabulary building. When they practice their new vocabulary, a lexically driven grammar checker, as described above, could then chunk-parse the sentences or phrases given as answers to the exercises and propose corrections as necessary. Lexical items that are problematic for learners could be marked in the stock of vocabulary being learned and subsequently presented in varying contexts during the following learning sessions. Sample exercises can be found in Lewis (1993) and Nesselhauf and Tschichold (2002). For each learning session, the ICALL system would generate lists of lexical items to be practiced, taking into account the spacing effect and other relevant findings as well as the learners' past performance and learning aims, and would guide them through the session, collecting the information needed for the next tailor-made session. Exercises calling for answers in the form of short phrases or single sentences would be interspersed with the occasional essay question on specified topics, where learners have to focus on the content but need to use recently acquired vocabulary items. Such relatively restricted texts could then be analyzed by the system's grammar checker, thus making intelligent feedback possible and providing true help to language learners.
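
The first step of such a session, assessing the learner's vocabulary range from an essay, can be approximated by a lexical frequency profile in the spirit of Goodfellow, Lamy, and Jones (2001). In the sketch below, the two frequency bands are tiny placeholders for real frequency lists.

# Rough lexical frequency profile of a learner essay. The two frequency
# bands are tiny placeholders for real frequency lists.
BAND_1 = {"the", "a", "is", "and", "people", "think", "good", "important"}   # most frequent words (assumed)
BAND_2 = {"significant", "society", "environment", "consider"}               # next band (assumed)

def lexical_profile(tokens):
    # Share of tokens falling into each frequency band.
    counts = {"band1": 0, "band2": 0, "offlist": 0}
    for token in tokens:
        if token in BAND_1:
            counts["band1"] += 1
        elif token in BAND_2:
            counts["band2"] += 1
        else:
            counts["offlist"] += 1
    total = len(tokens)
    return {band: count / total for band, count in counts.items()}

essay = "people think the environment is a significant and important issue".split()
print(lexical_profile(essay))   # e.g. {'band1': 0.7, 'band2': 0.2, 'offlist': 0.1}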

8. CONCLUDING REMARKS

For the field of CALL, the recent rise in interest in the lexicon in theoretical, applied, and computational linguistics should be seen as a welcome development and a challenge. Our growing knowledge about words and typical word combinations, together with findings from research on learner corpora, should eventually allow developers of CALL programs to go beyond the limitations seen in today's commercial CALL software by starting the analysis with words and their contexts. This strategy implies a more local approach at first, but the long-term perspective promises to be considerably better than what we currently see in CALL. Such an approach cannot be implemented quickly, however; more work is needed on the patterns of lexical items and especially on the types of errors made by language learners in relation to those patterns.

NOTES

1 I do not wish to claim that CALL has been a commercial success for publishers—I merely want to make the point that ICALL programs have not made it into the commercial sector.

2 For some examples, see Nesselhauf and Tschichold (2002).

3 For types of learner output possible in CALL, see Chapelle (1998).

4 Dagneaux, Denness, and Granger (1998) point out that error classification was often flawed, leading to nonreplicable results of error analysis studies.

5 James (1998) gives coinages such as *massacrate and calques such as baby car as examples.

6 Among the examples listed in James (1998), we find *littel, *deepth, *intresting.

7 There are a handful of possible exceptions for this kind of error. If a (female) German learner writes about my man, one could consider proposing my husband instead. Compared to the learner's total vocabulary in the foreign language, however, such potential errors comprise only a very small fraction.

REFERENCES

Abney, S. P. (1991). Parsing by chunks. In R. C. Berwick, S. P. Abney, & C. Tenny (Eds.), Principle-based parsing: Computation and psycholinguistics (pp. 257-278). Dordrecht: Kluwer.

Carter, R. (1998). Vocabulary: Applied linguistic perspectives. London: Routledge.

Chapelle, C. (1997). CALL in the year 2000: Still in search of research paradigms? Language Learning & Technology [online journal], 1 (1), 19-43. Available: llt.msu.edu

Chapelle, C. (1998). Multimedia CALL: Lessons to be learned from research on instructed SLA. Language Learning & Technology [online journal], 2 (1), 22-34. Available: llt.msu.edu

Cutting, J. (2000). Written errors of international students and English native speaker students. In G. M. Blue (Ed.), Assessing English for academic purposes (pp. 97-113). Bern: Peter Lang.

Dagneaux, E., Denness, S., & Granger, S. (1998). Computer-aided error analysis. System, 26, 163-174.

Ellis, N. C. (1995). The psychology of foreign language vocabulary acquisition: Implications for CALL. Computer Assisted Language Learning, 8 (2-3), 103-128.

Goodfellow, R., Lamy, M.-N., & Jones, G. (2001). Assessing learners' writing using lexical frequency. ReCALL, 14 (1), 133-145.

Granger, S. (Ed.). (1998). Learner English on computer. London: Longman.

Granger, S., & Tribble, C. (1998). Learner corpus data in the foreign language classroom: Form-focused instruction and data-driven learning. In S. Granger (Ed.), Learner English on computer (pp. 199-209). London: Longman.

Holland, V. M., Kaplan, J. D., & Sams, M. R. (1995). Intelligent language tutors: Theory shaping technology. Mahwah, NJ: Lawrence Erlbaum.

Hunston, S. (2002). Corpora in applied linguistics. Cambridge: Cambridge University Press.

James, C. (1998). Errors in language learning and use. London: Longman.

Lennon, P. (1996). Getting 'easy' verbs wrong at the advanced level. IRAL, 34 (1), 23-36.

Levelt, W. J. M. (1989). Speaking: From intention to articulation. Cambridge, MA: MIT Press.

Levy, M. (1997). Computer-assisted language learning: Concept and conceptualization. Oxford: Clarendon Press.

Lewis, M. (1993). The lexical approach: The state of ELT and a way forward. Hove, UK: LTP.

Nation, I. S. P. (2001). Learning vocabulary in another language. Cambridge: Cambridge University Press.

Nattinger, J.-R., & DeCarrico, J. S. (1992). Lexical phrases and language teaching. Oxford: Oxford University Press.

Nesselhauf, N., & Tschichold, C. (2002). Collocations in CALL: An investigation of vocabulary-building software for EFL. Computer Assisted Language Learning, 15 (3), 251-280.

Ringbom, H. (1998). Vocabulary frequencies in advanced learner English: A cross-linguistic approach. In S. Granger (Ed.), Learner English on computer (pp. 41-52). London: Longman.

Schmitt, N. (2000). Vocabulary in language teaching. Cambridge: Cambridge University Press.

Schulze, M., Hamel, M.-J., & Thompson, J. (1999). Language processing in CALL. ReCALL, 11.

Sinclair, J. (1991). Corpus, concordance, collocation. Oxford: Oxford University Press.

Singleton, D. (1999). Exploring the second language mental lexicon. Cambridge: Cambridge University Press.

Tschichold, C. (1999). Grammar checking for CALL: Strategies for improving foreign language grammar checkers. In K. Cameron (Ed.), CALL: Media, design & applications (pp. 203-222). Lisse: Swets & Zeitlinger.

Vandeventer, A. (2001). Creating a grammar checker for CALL by constraint relaxation: A feasibility study. ReCALL, 13 (1), 110-120.

Vandeventer, A., & Ndiaye, M. (2002). A spell checker tailored to language learners. In J. Colpaert, W. Decoo, M. Simons, & S. Van Bueren (Eds.), CALL professionals and the future of CALL research: Proceedings of CALL 2002 (pp. 315-329). Antwerp: University of Antwerp.

AUTHOR'S BIODATA

Cornelia Tschichold worked and published on grammar checking for learners of English before writing her Ph.D. in the field of computational lexicography and phraseology. She is now working on a book dealing with the role of the lexicon in CALL and teaches English linguistics at the University of Neuchâtel in Switzerland. When flooded with her students' essays, she sometimes dreams about the perfect grammar checker.

AUTHOR'S ADDRESS

Cornelia Tschichold

Institut d'anglais

Université de Neuchâtel

Espace Louis Agassiz 1

CH-2001 Neuchâtel

Switzerland

Phone: +41 32 718 18 62

Fax: +41 32 718 17 01

Email: Cornelia.Tschichold@unine.ch

