
Retrieving Similar Buggy Prolog Programs Using MEDD's Student Model

Chapter 1
INTRODUCTION

1.1 Background of the Study

It has been said that modeling the learner is the central aspect of intelligent tutoring systems. This realization spurred the development of student modeling systems, i.e., systems that diagnose student errors. These systems have proved effective in areas like mathematics (subtraction, high school algebra, differentiation) and computer programming (Pascal, Lisp, C++).

The essential elements in constructing a student model are the background knowledge and the student behavior. The first component, the background knowledge, is difficult to acquire automatically and to extend, and is in fact one of the major bottlenecks in the development of student modeling systems. The difficulty lies in constructing the bug library, the collection of common errors made by students in a particular domain (Sison, 1998). This implies that the background knowledge is a critical component in student model construction. Indeed, few systems are able to construct and extend bug libraries. The second component, the student behavior, can be classified as simple or complex (Sison, 1998). Simple behavior involves classification, while complex behavior involves problem-solving tasks. Traditionally, in domains where the student behavior is simple (e.g., subtraction), student modeling systems used multiple behaviors as the primary input. One drawback of this is that the modeling system might construct a student model that is too general (incorrectly covering certain behaviors) or too specific (missing certain behaviors). To overcome this problem, student modeling systems compute a generalization of these behaviors, synthesizing elements from the background knowledge; the synthesized generalization makes up the student model. This is known as synthetic student modeling, or the synthetic approach. In domains that deal with complex behavior (e.g., computer programming), student modeling systems utilize a single behavior. This is necessary because it is not practical to assign multiple problems to students and analyze the errors only after all the solutions have been turned in. This is known as analytic student modeling. One drawback of this is that the system might construct a student model that is too specific: multiple knowledge errors might be entangled in a single behavior, thus missing certain behaviors. These systems adopted a transformational approach that uses background knowledge to transform the actual behavior into the desired behavior, or to verify whether the two are equivalent. The specific operators needed to perform the transformation make up the student model. A modeling system can of course construct a student model by utilizing both kinds of behavior. Among the schemes that fall under this approach is MEDD (Sison, 1998). MEDD explores a multistrategy approach by inducing knowledge from a single behavior with the help of the knowledge learned from multiple behaviors. It utilizes a machine learning technique called unsupervised inductive learning to automatically construct and extend a bug library for novice (Prolog) programming in an incremental manner while constructing the student model. It will be noted that prior to MEDD, unsupervised inductive learning had not been explored in student modeling, and the bug libraries of most student modeling systems were all built in. MEDD has two architectural components, the first of which performs an intention-based identification of behavioral discrepancies, and the second of which performs the subsequent multistrategy clustering of these discrepancies into coherent classes that denote knowledge-level errors. However, the goal of an intelligent tutoring system is not only to detect errors (bugs) and construct a representation of the student's knowledge (the student model) but also to support learners in their problem-solving tasks. Information from the student model enables the system to facilitate its tutorial and remediation activities. Weber (1994) and VanLehn (1987) suggest several ways of utilizing the student model for the advancement of students: offering unsolicited advice, providing examples, generating problems, and adapting explanations in computer-based education systems.
Similarly, Holt et al. (1991) propose that the student model can be used in various ways: to provide the learner with help or advice, to provide feedback to the learner, or to interpret the behavior of the learner.

It is widely accepted that programmers utilize some form of case-based reasoning when producing a program that satisfies a specification. Indeed, studies on supporting novices such as (Weber, 1995) and (Brna and Good, 1996) have used a case-based reasoning approach in retrieving similar cases/examples for presentation to students. Such reasoning draws on the relatively limited experience novice programmers have: the programs that can be found as examples in textbooks as well as the programs that the novice has written. However, the fact that novices should be provided with assistance when they generate a buggy/erroneous program has not received sufficient attention from these systems. Novice programmers frequently make mistakes (Eysenck and Keane, 1990), and to provide them with cases/examples is clearly important. Studies show that removing examples from manuals reduces comprehension by half (Mittal and Paris, 1996). In like manner, several studies showed that students prefer examples over purely procedural descriptions for solving problems. Redmiles (1988) observed that students learning procedures can use and generalize from even a single example, relying on heuristics and some background knowledge. Pirolli and Anderson (1985) observed that examples played a crucial role for students learning LISP concepts. In another study, Kessler and Anderson (1986) noted variability in learners' performance and concluded that, when relying on examples, learners must be able to construct correct mental models of the examples. Similarly, Chi et al. (1989) observed that students learn well from examples, provided that they are careful to explain the examples to themselves as they learn.

The student model is one of the most important features that an intelligent tutoring system or knowledge-based help system needs in order to direct its tutoring process. Indeed, the usage of student models for tutoring/remediation purposes has been an area of major research concern. Thus, this research deals with supporting novices by retrieving similar buggy/erroneous programs using MEDD's student model.

1.2 Research Objectives

1.2.1 General Objective

The research aims to use MEDD's student model to retrieve similar buggy programs in a case library of student programs.

1.2.2 Specific Objectives

In order to meet the general objective, the research aims:

1. To define the representation of students' buggy programs in a case library, as well as the mechanism for organizing and indexing these cases.
2. To develop a mechanism for retrieving similar cases.
3. To create a prototype that will validate the retrieved cases.

1.3 Scope and Limitations

The research focuses on buggy program matching, in which a case library is searched for a buggy program similar to the current one using MEDD's student model. Similarity is assessed in terms of the knowledge-level error(s) that the program exhibits. The system searches the case library for a single case that matches or partially matches. In situations where multiple cases match the current input, the system chooses one of the cases at random. If a case cannot be classified, this implies that the student model may not be complete; Self (1994) defends the view that a student model need not be completely accurate in order to be useful. Still, if the case library does not contain a sufficiently similar buggy case, the system warns the user that no similar cases could be found. Aha et al. (1999) assert that it is more appropriate to do so in order to prevent recalling inappropriate cases. The research does not show empirical results on its educational impact, such as achievement and affective results. It is not concerned with the acquisition, understanding, performance, retention and transfer of students' knowledge and skills. In terms of affective measures, it is not concerned with attitudes and emotions, which may impact upon students, nor with academic and achievement motivation.

1.4 Significance of the Study

This thesis considers the multifunctionality of the student model. Not only does it represent the student's knowledge/beliefs, but it can also be utilized further, namely to retrieve a similar buggy program. The retrieved buggy programs can be posed as questions to students, for instance as sessions to repeat or further problems to be solved for adaptive testing. The buggy cases stored in the Case Library could be utilized by the tutorial component of an Intelligent Tutoring System as feedback, in the form of error-specific examples for the students to analyze. Using the student model in this way will not only lead to better functionality of the system; instructional designers, teachers and students will also benefit from these cases.


1.5 Research Methodology

Phase 1.

Review of Related Literature. A study of the existing papers on supporting learners/students and other related fields was performed. The papers were analyzed and compared to identify their advantages and disadvantages.

Phase 2

Project Design. In this phase, the proposed data structures of the case library and the organization of the cases as they enter the Case Library were specified. This also involves the identification of the indices that were used in organizing the buggy cases. Furthermore, the proposed similarity metrics in retrieving similar buggy cases were also specified.

Phase 3

Implement a Prototype. A computer program in the C language was implemented. The program includes the procedures for constructing MEDD's error hierarchy, constructing the Case Library, retrieving similar buggy programs and applying the heuristic in the retrieval process.

Phase 4

Analysis and Observation. In this phase, the appropriate data structure for building the case library and the appropriate index for organizing the cases were specified. Furthermore, the appropriate similarity metric for retrieving similar buggy cases was identified. The results were evaluated, analyzed and discussed. Moreover, the reasons for obtaining the results were illustrated and described.


Chapter 2
RELATED LITERATURE

This chapter presents some of the related work that has been done by several researchers on supporting novices by retrieving similar cases.

2.1 Episodic Learner Modeling

ELM is an intelligent programming environment that supports novices in learning the programming language LISP. It adapts to the particular learner and is able to provide examples and remindings from the learner's individual programming history. It defines examples as preselected materials derived from textbooks, while remindings are previous function definitions that the learner has already coded. Its diagnostic component uses an episodic user modeling approach wherein the learner's solution is analyzed recursively, searching for every concept, plan and rule that was used, until it matches a function name, a parameter or a constant. This diagnosis results in a derivation tree which shows all the concepts, plans and rules that were used by the learner. Episodic frames are then created in the knowledge base and contain strategies (concepts, plans and rules that were used by the learner) as instances of these concepts. This entire set of episodic instances constitutes the episodic learner model. It is the basis for retrieving previous cases by the retrieval mechanism called Explanation-Based Retrieval (EBR). EBR involves applying two similarity metrics: organizational similarity and Tversky's (1977) contrast model. Though it was empirically shown that the retrieved examples and remindings were judged by experts as useful for an individual learner, its retrieval mechanism is an elaborate and labor-intensive approach. Considering that it was designed for novice programmers, it is not clear how it deals with buggy programs, which are a very common condition encountered with beginners and of which designers of support systems should all be aware.
2.2 TED

TED is a Prolog Techniques Editor designed to support novices by providing facilities to retrieve cases using an intermediate description language. Its designers observed that novices are inclined to focus on surface features only, and that an intermediate description language is necessary to help students look at the program structure in a deeper way. This is the index used for case retrieval. The retrieval mechanism imposes three constraints which constitute the intermediate description language. The students have to follow a given sequence of tasks. They are required to specify the arity (number of arguments) of a predicate and indicate the type (atom, number or list), the mode (input or output) and, optionally, the statements describing the relationships between pairs of arguments in the top-level call. A similar case is retrieved from the case library, which consists of program solutions from 32 different tasks. The very simple intermediate description language is worthwhile, easy to learn and reasonably effective in producing a set of useful cases. TED's advantage is that the information currently used to retrieve cases is relatively quick to generate. Although Weber has produced some impressive results for the automated selection of the best case, it is done at the cost of producing quite an elaborate task analysis for each program. TED's drawback, however, is that its retrieval mechanism was built on the assumption that novices will input the correct constraints (at least the first two) in order to retrieve a similar case. Though constraints are selected via pull-down menus, the third constraint imposes a burden upon the students. They would most likely have a hard time finding the right description of the relationship that must hold between arguments. Weber's findings state that placing students in a rigid task like this, that is, forcing students to reflect on their problem-solving process, has not been effective up to now. Weber refers to the learning environments of (Linn, 1992; Recker and Pirolli, 1992; Self, 1992). Again, there is no specific mention of how TED deals with an incorrect description.
An incorrect description means that a student misconception exists, but since there is no diagnostic component that will determine its cause, it will be hard to decide which knowledge needs repair. We argue that it is important to determine the student's knowledge/misconceptions (the student model) before we provide support. It is ironic that assistance is usually provided to novices when they already know the correct program, when it is more important to assist them when they have an erroneous program. Up to now, no steps have been undertaken to solve this problem of providing assistance to novices when they generate an erroneous program.

2.3 MEDD

We will briefly describe MEDD, as this research utilized MEDD's student model in order to retrieve similar buggy programs. The following is taken from (Sison, 1998). MEDD's target and implementation language is Prolog (Programming in Logic), i.e., its inputs and outputs are Prolog programs, and it is itself implemented in Prolog. Thus, we take a moment to briefly and informally describe the components of Prolog programs. A Prolog program is a sequence of sentences, each of which comprises a head and a body. The head either consists of a single goal or is empty. The body consists of a sequence of zero or more goals (also called subgoals). If the head is not empty, the sentence is a clause; otherwise it is called a directive, of which the query is the most important kind. If the body of a clause is empty, the clause is called a unit clause, also called a fact. A non-unit clause is called a rule. A goal (also called a literal or procedure call) is a kind of term. A term can be a constant (which denotes particular individuals such as numbers and atoms), a variable (which denotes a single but unspecified individual), or a compound term. A compound term comprises a functor and a sequence of one or more terms called arguments. A functor is characterized by its name, which is an atom, and its arity, or number of arguments. Terms are ground if they contain no variables; otherwise they are nonground. The predicate for a functor is the sequence of clauses in the program whose head goals have that functor. This sequence of clauses is also called a predicate definition, paragraph, procedure, or procedure definition. Prolog programs have a declarative and a procedural semantics. Informally, a non-unit clause P :- Q, R, where P is the head and Q and R are the subgoals which make up the body of the clause, can be read declaratively as "P is true if Q and R are true". The primary inputs of MEDD are novice Prolog programs, i.e., Prolog programs written by students learning the language. These programs include reverse/2 programs for reversing the order of the elements of a list and sumlist/2 programs for summing the elements of a list of integers.
Gegg-Harrison (1994) identified a suite of recursive list-processing Prolog programs that he empirically showed to be sufficient for novice Prolog programmers. This suite is composed of length/2, sum/2, product/2, count/2, enqueue/3, append/3 and reverse/2. The most complex program in the suite is reverse/2. MEDD uses two algorithms, namely MDD (Multistrategy Discrepancy Detection) and MMD (Multistrategy Misconception Discovery). The first algorithm, MDD, performs an intention-based diagnosis wherein it determines the intention underlying a student program, computes in the process the discrepancies between the student and reference programs, and then checks whether these discrepancies are associated with known knowledge-level errors. If not, MDD tests the student program to help confirm whether the syntactic discrepancies between the student and reference programs are indeed behavior-level errors or stylistic differences that result in correct program variants. Behavioral discrepancies can be expressed naturally in the form of atomic formulas in function-free first-order logic. MDD outputs the discrepancy set, which is taken as input by the multistrategy clusterer, MMD. MMD takes the discrepancy set as one input object and clusters it into a hierarchy of errors and error classes, incrementally revising the error hierarchy in the process. Each error hierarchy is composed of subtrees, where each subtree forms an intensional definition of a class of errors. Currently, the system uses four predicates or operators, namely remove, add, replace and switch, to express behavioral discrepancies. (Sison, 1998) suggests several ways of utilizing or putting into use MEDD's student model: the intensional definition of a knowledge-level error detected in a student program can be used to provide hints to the student, or to select or construct similar buggy programs to analyze or to solve.


Chapter 3
THEORETICAL FRAMEWORK

This chapter presents the theories and concepts related to student modeling and the techniques and strategies for utilizing a student model. It also includes the important concepts that are relevant to this research.

3.1 Student Modeling

Intelligent tutoring systems and knowledge-based help systems study the use of AI techniques in computer-aided instruction. An important aspect concerns building a model of the student's current understanding of the domain and integrating this model of human cognition, the so-called student model, into the general knowledge base. This model directs the remediation process of these systems.

3.1.1 Student Model

Sison (1998) defines student modeling as the construction of a qualitative representation, called a student model, that accounts for student behavior in terms of a system's background knowledge. It is an approximate, possibly partial, primarily qualitative representation of student knowledge about a particular domain that fully or partially accounts for specific aspects of student behavior. These three, the student model, the student behavior, and the background knowledge, are the essential elements of student modeling.

3.1.2 Student Behavior

Student behavior (Sison, 1998) refers to a student's observable response to a particular stimulus in a given domain that serves as the primary input to a student modeling system. This input can be an action (e.g., writing a program) or, more commonly, the result of that action (e.g., the written program). To describe or explain the behavior, we have to identify the specific relationships between the input behavior and the system's background knowledge. Such relationships can be analyzed at various levels. At the most basic level, which we call the behavior level, relationships, particularly discrepancies (i.e., syntactic mismatches) between actual and desired behaviors, are determined. We call these discrepancies behavioral errors when they occur in incorrect behavior.

3.1.3 Background Knowledge

The background knowledge comprises the network of correct facts, procedures, concepts, principles, schemata and strategies of a domain, collectively called the theory of that domain, and the misconceptions held and other errors made by a population of students in the same domain, collectively called the bug library. It is generally impossible to have a complete bug library: while one might enumerate the correct rules for, say, arithmetic or high school algebra, it is difficult if not impossible to enumerate all the misconceptions and other errors that students may possibly have. In spite and because of the difficulty of having a complete bug library, very few systems have the capability to automatically extend, let alone construct, their bug libraries. Specifically, only PIXIE (Hoppe, 1994) and ASSERT (Mooney, 1996) can extend their background knowledge.

3.2 How to Make Use of a Student Model

In most systems the reason for a student model is to allow instruction to be designed specifically for an individual learner. Without the student model component, an ITS would be like a human tutor who knows nothing about the individual learner, and is therefore unable to adjust instruction to changes in the learner's behavior. With knowledge about the learner, the tutor can control the order and difficulty of the material presented to the learner, as well as provide appropriate remediation. Thus the student model is used to assist in selecting the content, selecting the tutorial strategy and confirming the diagnosis. Diagnosis is the process of inferring the student model. The student model and the diagnosis are tightly interwoven. One can think of the student model as the data structure representing the learner's knowledge and of diagnosis as the process that manipulates this knowledge. Designing these two components together constitutes the student modeling problem. The choice of student model significantly affects what can be accomplished in terms of diagnosis. VanLehn (1987) outlines several common uses of the student model:




- Advancement: The student model can represent the learner's level of mastery. Periodically, the system examines the student model for the level of mastery of the current concept or topic, weighs the results and decides whether or not to advance the learner to the next concept or topic.
- Offering unsolicited advice: By knowing the learner's state of knowledge, the system can decide to offer advice to the learner when deemed appropriate.
- Problem generation: In order to generate appropriate problems or tasks, i.e., those just beyond the learner's current abilities, the problem generation module consults the student model to determine the learner's current capabilities.
- Adapting explanations: In order for a system to issue instructive explanations to the learner, it consults the student model to determine what the learner already knows. The system then restricts its explanations to those involving concepts the learner already understands.

3.3 Machine Learning

Machine Learning is the induction of new knowledge, or the compilation of existing knowledge, which in turn may lead to improvements in the performance of a task. Most machine learning research has focused on improving accuracy in classification tasks, but research has also been done on improving efficiency and accuracy in problem solving (Sison, 1998).

3.4 Heuristic

A heuristic (Luger, 1993) is a strategy for selectively searching a problem space. It guides the search along lines that have a probability of success while avoiding wasted or apparently stupid effort. Human beings use a large number of heuristics in problem solving and do not use exhaustive search. A heuristic is only an informed guess of the next step to be taken in solving a problem. It is often based on intuition and experience, and it can lead a search algorithm to a suboptimal solution or fail to find a solution at all. This is an inherent limitation of heuristic search. A heuristic is a technique that improves the efficiency of a search while possibly sacrificing claims of completeness. However, if we cannot perform an exhaustive search, we are not necessarily doomed to failure: creative heuristic tests can be a great deal of help. The use of a heuristic in an existing algorithm may enable it to quickly solve a large instance of a problem, provided the heuristic works on that instance. In this study, a heuristic was utilized and evaluated to determine whether its application improved performance.


Chapter 4
CASE LIBRARY CONSTRUCTION

This chapter describes the data structures and algorithms employed in building the case library, as well as the mechanism for organizing the buggy cases as they enter the case library. The C programming language is used for the implementation. The selection of the programming language is based on the facilities it provides for implementing hierarchical data structures like trees and graphs. Because of their generality and flexibility, trees and graphs are powerful data structures that prove valuable in advanced applications. They provide an excellent way to describe the essential features of many applications, thereby facilitating the specification of underlying problems and the formulation of algorithms for their solution (Kruse, 1989). These structures do not occupy fixed, preallocated memory locations; a certain amount of memory must be allocated whenever a structure is created, to hold all the necessary information. The standard library function malloc performs this task: it allocates the exact number of bytes needed for a given structure and returns a pointer that is then used to reference the memory block just allocated.

4.1 COMPONENTS OF THE CASE LIBRARY

The primary components in building the case library for sumlist/2 and reverselist/2 are MEDD's error hierarchy, the buggy Prolog programs of the students, and the discrepancies, i.e., the behavior-level errors, of these programs.

4.1.1 MEDD's Error Hierarchy

MEDD's error hierarchy for reverselist/2 and sumlist/2 is composed of subtrees which form the intensional definitions of the knowledge-level errors of the students. Each particular node in the error hierarchy is referred to as an error_node. Each error_node represents one component in a file of structures. Each component is referred to as a node_structure, which contains the behavior-level errors that form the knowledge-level errors. Table 4-1 provides a description of each member of this node_structure. Notice in Table 4-1 that a pointer to an edge is also included as one of the member components. These edges are used as the difference links in building the case library. To properly keep track of every error_node in the error hierarchy, parent, first child, next child and left pointers are assigned to each error_node. The parent pointer always points to the parent node, the first child pointer points to the first child node, the next child pointer points to the next sibling node, and the left pointer points to the previous node, as shown in Figure 4-1.

[Figure 4-1 depicts a fragment of MEDD's student model: a root node with two child nodes, Replace(subgoal2,[H],H) and Replace(base,e2,_), connected by parent, first child, next child and left pointers.]

Figure 4-1 Nodes in MEDD's student model

Member                        Description
Behavior                      array that stores the behavior-level errors
Counter                       the number of behavior-level errors
Node_name                     the name of the node
Parent_name                   the name of the parent node
Parent_ptr, first_childptr    links in the structure
Next_childptr, left_ptr       used to keep track of the nodes
Ptr_arcnode                   pointer to an edge

Table 4-1 Structure Member Description for node_structure in MEDD's error hierarchy

The algorithm presented next describes the construction of MEDD's error hierarchy for the reverselist/2 and sumlist/2 data sets, which is one of the steps in building the case library.

The algorithm accepts one parameter, the file of node_structures in the error hierarchies of reverselist/2 and sumlist/2.

Algorithm 4.1.1.1
Input: File of node_structure in the error hierarchy of reverselist/2 and sumlist/2
Output: MEDD's error hierarchy

Build_Errorhierarchy(File of node_structure)
    headptr := null
    while the entire file of node_structure has not been read
        read a node_structure and create a new error_node
        copy all the information of the node_structure to the newly created error_node
        make the necessary connections to the links of the error_node
        if the parent name of the error_node is null then
            let headptr point to that error_node
        else
            search for the error_node's parent name using inorder traversal
            if found then
                make the necessary connections to the links of the error_node
    end while

The algorithm starts by initializing the variable headptr, which will later point to the root node of the error hierarchy. The construction of MEDD's error hierarchy proceeds by reading each node_structure from the file of structures and creating a new error_node each time a node_structure is read. All the information stored in the node_structure is copied to the newly constructed error_node, and the necessary links or connections to the parent, first_child, next_child and left_ptr pointers are made. If the parent name of the error_node is not null, the algorithm searches for the error_node's parent using a nonrecursive inorder traversal and, once the parent is found, the necessary links or connections are again made.


4.1.2 Buggy Prolog Programs

Buggy Prolog programs are incorrect programs in the reverselist/2 and sumlist/2 dataset, as shown in Figure 4-2. The figure also shows the corresponding behavior-level errors, or discrepancies, in the programs.

The reverselist/2 buggy program shown in Figure 4-2 is erroneous and was identified by the experts as exhibiting the knowledge-level error known as invalid append. Thus, invalid append is a knowledge-level error composed of two behavior-level errors, namely remove(subgoal2) and replace(head, R, [T1|H]). Similarly, the sumlist/2 buggy program shown in Figure 4-2 is erroneous and was identified by the experts as exhibiting a knowledge-level error known as lack of background knowledge (introducing variables in the program). In like manner, introducing variables in a program is a knowledge-level error composed of two behavior-level errors, namely switch(subgoal1, subgoal2) and replace(subgoal1, T1, S).

Reverselist buggy program:
    Reverse([ ],0).
    Reverse([H],T,[R|H]) :- Reverse(T,R)

Discrepancies: remove(subgoal2), replace(head,R,[T1|H])

Sumlist buggy program:
    Sumlist([ ],0).
    Sumlist([H|T],S) :- S = H + S, Sumlist(T,S)

Discrepancies: switch(subgoal1,subgoal2), replace(subgoal1,T1,S)

Figure 4-2 Examples of Buggy Prolog Programs

Each buggy program in the reverselist/2 and sumlist/2 dataset, together with its behavior level errors and its number of goals and variables, is stored in a file of structures. This buggy program is referred to as a buggy_structure. Table 4-2 lists the member components of this buggy_structure.


Member       Description
Behavior     array that stores the discrepancies
Buggy_line   array that stores the buggy program per line
Counter      represents the number of discrepancies
Goals        represents the number of goals
Variables    represents the number of variables

Table 4-2 Structure Member Description for Buggy_Structure

Buggy_structures actually become the buggy cases as they enter the case library. Table 4-3 describes the member components of each buggy case. This buggy case is referred to as buggy_node in building the case library.

Member        Description
Behavior      array that stores the behavior level errors
Buggy_line    array that stores the buggy program per line
Counter       represents the number of behavior level errors
Ptr_to_case   pointer to the next case

Table 4-3 Structure Member Description for Buggy_node
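The two record layouts of Tables 4-2 and 4-3 can be sketched as classes (a sketch in Python rather than the thesis's C structures; field names follow the tables):

```python
class BuggyStructure:
    """A buggy program as stored in the file of structures (Table 4-2)."""
    def __init__(self, behavior, buggy_line, goals, variables):
        self.behavior = list(behavior)       # discrepancies (behavior level errors)
        self.buggy_line = list(buggy_line)   # the buggy program, one entry per line
        self.counter = len(self.behavior)    # number of discrepancies
        self.goals = goals                   # number of goals
        self.variables = variables           # number of variables

class BuggyNode:
    """A buggy case in the case library (Table 4-3)."""
    def __init__(self, structure):
        # all information of the buggy_structure is copied into the new case
        self.behavior = list(structure.behavior)
        self.buggy_line = list(structure.buggy_line)
        self.counter = structure.counter
        self.ptr_to_case = None              # pointer to the next case
```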

4.2 DATA STRUCTURE

The selection of an appropriate data structure for the case library contributes to the overall performance of the retrieval process. This research investigates the following data structures for building the case library and later selects the more appropriate one.


4.2.1 MEDD's Error Hierarchy and Graphs of Buggy Cases

It has been mentioned earlier that one of the components in building the case library is MEDD's error hierarchy, which describes the intensional definitions of knowledge level errors. The first scheme therefore utilizes the leaf nodes of the error hierarchy, with edges that maintain the buggy cases, as shown in Figure 4-3. Edges are directed links that point to buggy cases. Thus, the case library is composed of two interconnected substructures: the tree of the error hierarchy produced by MEDD, and the edges that point to linked lists of buggy cases.
[Figure content: leaf nodes Replace(subgoal1,T1,R), Replace(subgoal2,T1,X?) with X? = R and X? = H; edges from the leaf nodes point to linked lists of buggy cases]

Figure 4-3 Error hierarchy and graphs of buggy cases

4.2.2 Linear Representation of MEDD's Error Hierarchy and Graphs of Buggy Cases

The second scheme utilizes a linear representation of MEDD's error hierarchy whose edges also maintain the buggy cases, as shown in Figure 4-4. This linear representation is referred to as a Linear_list. The case library is again composed of two interconnected substructures: the linear representation of MEDD's error hierarchy (the Linear_list) and the graphs of buggy cases.


[Figure content: linear nodes Replace(subgoal1,T1,R), Replace(subgoal2,T1,R) and Replace(subgoal1,T1,R), Replace(subgoal2,T1,H); their edges point to linked lists of buggy cases]

Figure 4-4 Linear Representation of MEDD's error hierarchy and graphs of buggy cases

The data structure for each member node in the Linear_list, referred to as a linear_node, is shown in Table 4-4. It is basically the same as the hierarchical structure shown in Table 4-1, except that the parent and first-child pointers are omitted.
Member                   Description
Behavior                 array that stores the behavior level errors
Counter                  represents the number of behavior level errors
Node_name                represents the name of the node
Parent_name              represents the parent name of the node
Next_childptr, left_ptr  links in the structure used to keep track of the nodes
Ptr_arcnode              pointer to an edge

Table 4-4 Structure Member Description for linear_node

The algorithm presented next describes the Linear_list implementation of MEDD's error hierarchy. It accepts one parameter, the error hierarchy of reverselist/2 and sumlist/2.


Algorithm 4.2.2.1

Input  : MEDD's error hierarchy
Output : Linear_list

Build_Linear_list (MEDD's error hierarchy)
    Traverse the error hierarchy using inorder traversal
    For each leafnode ∈ error hierarchy
    Begin
        Create a new linear_node
        Traverse the path from the leaf node up to the root node
        For each node being traversed
            Copy the behavior level errors of the node to the linear_node
        End For
        Remove the instantiation operator and replace it with the indicated literals
    End For

The procedure starts by traversing the error hierarchy using inorder traversal. For each leaf node being traversed, a new linear_node is created and the path is traversed from that leaf node up to the root node. For each node along the path, all the behavior level errors are copied to the newly constructed linear_node. After the traversal from a particular leaf node to the root node is completed, the instantiation operator X? is removed and replaced with the indicated literals, as illustrated in Figure 4-5.
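The leaf-to-root copy plus the X? substitution can be sketched as follows (a sketch; the parent links and an `instantiation` pair on the leaf, e.g. ("X?", "R"), are modeling assumptions not named in the text):

```python
class HNode:
    """Minimal hierarchy node: behaviors, a parent link, and (for leaves)
    the instantiation of the X? operator."""
    def __init__(self, behavior, parent=None, instantiation=None):
        self.behavior = list(behavior)
        self.parent = parent
        self.instantiation = instantiation

def flatten_leaf(leaf):
    """Collect behavior level errors from the leaf up to the root, then
    replace the instantiation operator with the leaf's indicated literal."""
    behaviors, node = [], leaf
    while node is not None:
        behaviors.extend(node.behavior)
        node = node.parent
    operator, literal = leaf.instantiation
    return [b.replace(operator, literal) for b in behaviors]
```

Applied to the node of Figure 4-5, the leaf X? = H yields the linear node replace(subgoal1,T1,R), replace(subgoal2,T1,H).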

[Figure content: the node Replace(subgoal1,T1,R), Replace(subgoal2,T1,X?) with leaves X? = R and X? = H is flattened into the linear nodes Replace(subgoal1,T1,R), Replace(subgoal2,T1,R) and Replace(subgoal1,T1,R), Replace(subgoal2,T1,H)]

Figure 4-5 Illustration of a Linear_list creation

4.3 ORGANIZATION

Building the case library consists not only of compiling these buggy cases, but also of finding ways to organize them so that they can be retrieved successfully. The organization of stored knowledge greatly affects retrieval and reuse costs (Veloso, 1994). The leaf nodes of the error hierarchy and the Linear_list contain edges that differentiate the buggy cases they link. This research investigates the ability of these edges, or difference links, to organize the buggy cases in the case library. Two of the three algorithms employ the knowledge level error as the primary index in building the case library and the edges/difference links as secondary indices. Two types of labels for the edges are used: behavior level errors, and number of goals and variables.

4.3.1 MEDD and Behavior Level Errors

In the first scheme, the buggy cases are doubly indexed, with MEDD's knowledge level errors and the behavior level errors organizing the buggy cases. An example is shown in Figure 4-6. The labeled edge in the figure contains the behavior level error replace(subgoal2,[H],H).

[Figure content: knowledge level error Replace(subgoal1,T1,R), Replace(subgoal2,T1,X?) with leaves X? = R and X? = H; a labeled edge whose label name is replace(subgoal2,[H],H)]

Figure 4-6 Example of an edge labeled with behavior level error

Labels are obtained by determining the difference between the behavior level errors in the input buggy program and the knowledge level error. Thus, if the behavior level errors in the input buggy program are replace(subgoal1,T1,R) and replace(subgoal2,[H],H), and the knowledge level error is replace(subgoal1,T1,R), replace(subgoal2,T1,R), then the edge will be labeled as replace(subgoal2,[H],H). Table 4-5 provides the description of the edges labeled with behavior level errors.
Member        Description
Behavior      array that stores the behavior level errors
Counter       represents the number of behavior level errors
Ptr_arc       pointer to the next edge
Ptr_case      pointer to the first case
Number_cases  represents the number of cases

Table 4-5 Structure Member Description for Edges labeled with Behavior level errors

4.3.2 MEDD and Number of Goals and Variables

In the second scheme, the buggy cases are again doubly indexed, with MEDD's knowledge level errors and the number of goals and variables organizing the buggy cases. An example is shown in Figure 4-7. The labeled edge in the figure carries the label 4,4, which represents the number of goals and variables.

[Figure content: knowledge level error Replace(subgoal1,T1,R), Replace(subgoal2,T1,X?) with leaves X? = R and X? = H; a labeled edge whose label name is 4,4]

Figure 4-7 Example of an edge labeled with number of goals and variables


Labels are obtained in a straightforward manner. If a buggy program has 3 goals and 4 variables, then the edge is labeled with the same numbers of goals and variables as the input buggy program, i.e., 3 and 4 respectively.

Table 4-6 provides the description of the edges labeled with number of goals and variables.
Member        Description
Goals         represents the number of goals
Variables     represents the number of variables
Ptr_arc       pointer to the next edge
Ptr_case      pointer to the first case
Number_cases  represents the number of cases

Table 4-6 Structure Member Description for Edges labeled with number of goals and variables

4.3.3 Number of Goals and Variables

The third scheme utilizes the number of goals and variables as the primary index. An example is shown in Figure 4-8. In the figure, the buggy cases are no longer doubly indexed. A linked list of nodes carrying the corresponding numbers of goals and variables points to the buggy cases. This linked list, referred to as the List_of_goals_and_variables, is the primary index that maintains the buggy cases. Each node in the List_of_goals_and_variables is referred to as an index_node. Table 4-7 describes each member component of this index_node.

[Figure content: index nodes with (3 goals, 4 variables) and (3 goals, 3 variables), each pointing to a linked list of buggy cases]

Figure 4-8 Illustration of buggy cases whose primary index is number of goals and variables


Member           Description
No_of_goals      represents the number of goals
No_of_variables  represents the number of variables
Ptr_case         pointer to the buggy case

Table 4-7 Structure Member Description for an index_node

4.4 SIMILARITY METRIC

The choice of similarity metric affects the accuracy of the retrieved cases. This research investigates the suitability of two similarity metrics for retrieving similar buggy cases.

4.4.1 MEDD's Similarity Metric

The first similarity metric utilizes MEDD's knowledge level errors. Similarity is measured from the commonalities and differences between the buggy program and every knowledge level error in the error hierarchy. To measure the degree of similarity/dissimilarity, we adopt Tversky's contrast model, the same model that (Weber, 1996) and (Sison, 1998) adopted in building the student model:

    Sim(C,O) = θ f(C ∩ O) − α f(C − O) − β f(O − C)

which expresses the similarity between two objects C and O, each represented as a set of features, as a function of the weighted measures of their commonalities (C ∩ O) and differences (C − O, O − C), with f(X) implemented as the cardinality of X. The values for θ, α, and β are 1.0, 0.5, and 0.5 respectively, and the system threshold is 0; these are the same values used by (Sison, 1998) in his simulation. If the computed value exceeds the system's threshold, there is either a best match or a partial match. A match is considered a best match if the behavior level errors in the node are exactly the same as those of the input set, as shown in Figure 4-9.
[Figure content: input set Replace(subgoal2,[H],H); nodes Replace(subgoal1,T1,R), Replace(subgoal2,T1,R) and Replace(subgoal2,[H],H); the input set best-matches the latter node]

Figure 4-9 Example of a best match

A match is considered a partial match if an intersection and a variabilization generalization occur. To better understand what intersection and variabilization generalization mean, let us recall one of the knowledge level errors in the error hierarchy MEDD has produced, shown in Figure 4-10.

[Figure content: Replace(subgoal1,T1,R), Replace(subgoal2,T1,X?) with leaves X? = R and X? = H]

Figure 4-10 One knowledge level error in the Error Hierarchy

This node indicates that an intersection and a variabilization generalization occurred. MEDD generalized to the node shown in Figure 4-10 when, in the past, it contained the node shown in Figure 4-11
Replace(subgoal1,T1,R) Replace(subgoal2,T1,R)

Figure 4-11 Node in the Error Hierarchy

and it encountered the discrepancies replace(subgoal1,T1,R) and replace(subgoal2,T1,H).

Note that the first behavior level error replace(subgoal1,T1,R) matches exactly, so an intersection generalization is found. The second behavior level error in the discrepancy set, replace(subgoal2,T1,H), is compared with replace(subgoal2,T1,R) in the node. This time a variabilization generalization occurs: an instantiation operator X? is substituted for the third parameter, giving replace(subgoal2,T1,X?). This indicates that the third parameter in the literal can assume the values R and H.
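The variabilization step can be sketched as an argument-wise comparison of two literals, substituting X? where the arguments differ (a sketch; the functor(arg,...) string form is assumed, and nested or list-valued arguments are not handled):

```python
def variabilize(lit_a, lit_b):
    """Generalize two literals with the same functor: keep equal arguments,
    replace differing ones with the instantiation operator X?."""
    functor_a, args_a = lit_a.rstrip(")").split("(", 1)
    functor_b, args_b = lit_b.rstrip(")").split("(", 1)
    if functor_a != functor_b:
        return None                       # different functors: no generalization
    out = []
    for x, y in zip(args_a.split(","), args_b.split(",")):
        out.append(x if x == y else "X?")
    return "%s(%s)" % (functor_a, ",".join(out))
```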

4.4.2 Number of Goals and Variables

The second similarity metric is based on the number of goals and variables in the buggy programs. If an input program has 3 goals and 4 variables, a buggy program with the same number of goals and variables is retrieved.

4.5 CASE LIBRARY CONSTRUCTION OF REVERSELIST/2 AND SUMLIST/2 BUGGY PROGRAMS

The first algorithm for building the case library utilizes the data structures described in Sections 4.2.1 and 4.2.2. Two inputs are shown, Input 1 and Input 2: Input 1 is used if the case library utilizes the Linear_list; otherwise, Input 2 is used if the case library utilizes the error hierarchy. Inorder traversal is utilized if the algorithm uses the error hierarchy, and is omitted if the data structure used is the Linear_list. The similarity metric used is MEDD's similarity metric.

Let us recall that the knowledge level errors produced by MEDD for reverselist/2 and sumlist/2 are the primary indices for organizing the buggy cases, and the difference links, or edges, are the secondary indices. The algorithm utilizes both types of edge labels.


Algorithm 4.5.1

Input 1 : File of buggy_structure, Linear_list
Input 2 : File of buggy_structure, Linear_list, MEDD's error hierarchy
Output  : Case Library

Build_CaseLibrary(File of buggy_structure, Linear_list, MEDD's error hierarchy)
    ptr_kle ← null
    ptr ← head of Linear_list
    While we have not read the entire file of buggy_structure
    Begin
        Read one buggy_structure
        Create a new buggy_node and copy all the information of the buggy_structure
            to the newly created buggy_node
        ptr_kle ← ptr, match_all ← 1
        While ptr_kle <> null ^ match_all
        Begin
            Repeat
                value ← Compute_Similarity(discrepancies of buggy_node, ptr_kle)
                ptr_kle ← ptr_kle → next_childptr
            Until value = 1 ^ bestmatch
            leafnode ← Traverseleafnode(subtree, leafnode_name)
            Insert_case(leafnode, buggy_node, ptr_kle)
            match_all ← Check_status(discrepancies of buggy_node)
            Skip all sibling nodes
        End While
    End While

Building the case library starts by initializing the variable ptr_kle, which keeps track of each linear_node in the Linear_list, and the variable ptr, which steadily points to the first node in the Linear_list. A new buggy_node is created each time a buggy_structure is read from the file. All the information in the buggy_structure is copied to the newly created buggy_node, which now becomes the buggy case. The algorithm then computes the similarity metric for each linear_node in the Linear_list. If the computation exceeds the system's threshold and a best match is found, the procedure Traverseleafnode traverses the corresponding subtree using inorder traversal

and searches for the corresponding leaf node. Once found, it returns the actual leaf node in the error hierarchy. The procedure Insert_case inserts the buggy case into the appropriate list of buggy cases at the given leaf node. Finally, Check_status checks whether all the discrepancies in the buggy_structure are classified. If not, the algorithm skips all sibling nodes in the Linear_list that partially match. This is done because we want the buggy case to be indexed by the leaf node that it best matches. Sibling nodes are nodes that have the same knowledge level error; an example is shown in Figure 4-12. The algorithm then continues to classify the remaining discrepancies in the buggy case until the classification is completed.

Replace(subgoal1,T1,R) Replace(subgoal2,T1,R)

Replace(subgoal1,T1,R) Replace(subgoal2,T1,H)

Figure 4-12 An example of sibling nodes

The first algorithm requires the procedures Compute_Similarity, Traverseleafnode and Check_status. These three are discussed in detail shortly.

The first procedure introduced is Compute_Similarity. It requires two parameters, namely the discrepancies of the buggy case and the pointer to the knowledge level error (ptr_kle), which is the linear_node.


Algorithm 4.5.1.1

Input  : ptr_kle – represents the linear_node
         discrepancies of buggy case – the discrepancies in the buggy case
Output : integer indicating whether the similarity exceeds the system's threshold

Compute_Similarity (discrepancies of buggy case, ptr_kle)
    commonality ← 0
    diffco ← 0
    diffoc ← 0
    For each discrepancy ∈ discrepancies
        For each behavior level error ∈ ptr_kle
            If discrepancy = behavior level error then
                commonality ← commonality + 1
        End For
    End For
    diffoc ← discrepancies – ptr_kle
    diffco ← ptr_kle – discrepancies
    If (commonality – 0.5 * diffoc – 0.5 * diffco > 0) then
        Return 1
    Else
        Return 0

The algorithm describes the computation of MEDD's similarity metric. Initially, the integer variables for the commonality and the differences (diffoc and diffco) are initialized. The computation starts by comparing each discrepancy of the buggy case with each behavior level error of the given knowledge level error; for each equal member found, the variable commonality is incremented. After detecting all the commonalities, it computes the difference between the discrepancies and the knowledge level error, and the difference between the knowledge level error and the discrepancies. Finally, it returns 1 if the computation exceeds the system's threshold; otherwise it returns 0.
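Treating the discrepancies and the knowledge level error as sets, Compute_Similarity can be sketched as follows (a sketch with θ = 1.0, α = β = 0.5 and threshold 0, the values stated in the text):

```python
def compute_similarity(discrepancies, kle, theta=1.0, alpha=0.5, beta=0.5):
    """Tversky's contrast model: return 1 when
    theta*|C ∩ O| - alpha*|C − O| - beta*|O − C| exceeds the threshold 0."""
    c, o = set(discrepancies), set(kle)
    commonality = len(c & o)
    diffoc = len(c - o)          # discrepancies − knowledge level error
    diffco = len(o - c)          # knowledge level error − discrepancies
    sim = theta * commonality - alpha * diffoc - beta * diffco
    return 1 if sim > 0 else 0
```

Note that a single shared error out of two on each side gives sim = 1 − 0.5 − 0.5 = 0, which does not exceed the threshold; at least a majority of overlap is needed.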

The next algorithm introduced is the algorithm for finding a leaf node in the error hierarchy. It is actually an inorder traversal. It accepts two parameters, a subtree


in the error hierarchy and a leaf node name. It searches the node’s name in the given subtree and if found returns the actual leaf node.

Algorithm 4.5.1.2

Input  : subtree of the error hierarchy
         leafnode_name
Output : leaf node in the error hierarchy

Traverseleafnode(subtree, leafnode_name)
    Initialize stack
    Set local variable ptr to root of subtree
    Repeat
        While ptr is not nil
            Push value of ptr onto stack
            If ptr→node_name = leafnode_name then return (ptr)
            Set ptr to left pointer of ptr
        End While
        If stack is not empty then
            Pop value for ptr from stack
            Set ptr to right pointer of ptr
        End If
    Until (stack is empty) and (ptr is nil)
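Algorithm 4.5.1.2 can be sketched as follows (a sketch; the node layout with node_name, left, and right pointers follows the pseudocode):

```python
class TreeNode:
    def __init__(self, name, left=None, right=None):
        self.node_name, self.left, self.right = name, left, right

def traverse_leafnode(subtree, leafnode_name):
    """Non-recursive inorder walk with an explicit stack; return the node
    whose name matches leafnode_name, or None if it is absent."""
    stack, ptr = [], subtree
    while True:
        while ptr is not None:          # descend left, checking each pushed node
            stack.append(ptr)
            if ptr.node_name == leafnode_name:
                return ptr
            ptr = ptr.left
        if not stack:
            return None                 # stack empty and ptr nil: not found
        ptr = stack.pop().right         # resume with the popped node's right child
```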

The next algorithm presented is the algorithm for inserting the buggy case in the case library. It accepts three parameters, namely the leaf node of the error hierarchy, the buggy case, and the linear_node in the Linear_list.

Algorithm 4.5.1.3

Input  : leafnode – leaf node in the error hierarchy
         ptr_kle – represents a linear_node in the Linear_list
         buggy case
Output : appended case in the Case Library

Procedure Insert_case(leafnode, buggy case, ptr_kle)
    Compute the label name
    If leafnode has no edge then
    Begin
        Create a new edge and label it with the appropriate label name
        If the buggy case is classified for the first time then
            Insert the buggy case in the case library and let the case pointer point to the buggy case
        Else
            Let the edge's case pointer point to the buggy case
    End If
    Else
    Begin
        For each edge ∈ edges of the leaf node
            If the label of the edge = labelname then
            Begin
                If the buggy case is classified for the first time then
                Begin
                    For each case ∈ cases of the edge
                        If case = buggy_case then
                            Disregard the buggy case and classify another input set
                    End For
                    If buggy case ∉ cases of the edge then
                        Append the case to the list of cases
                End If
            End If
        End For
        If labelname ∉ labels of the edges then
        Begin
            Append a new edge to the list of edges and label it with the label name
            If the buggy case is classified for the first time then
                Insert the buggy case in the case library and let the case pointer point to the case
            Else
                Let the edge's case pointer point to the case
        End If
    End

The algorithm starts by computing the label name and checking whether the leaf node has an edge. If an edge does not exist, it creates the first edge of the leaf node and inserts the buggy case in the case library if the case is being classified for the first time; otherwise the edge simply points to the first buggy case. If the leaf node contains edges, it determines whether there exists a labeled edge that exactly matches the computed label name. If there is an exact match and it is the first time the buggy case is classified, it compares the current buggy case to the list of cases. If no

similar case is found, it is appended at the end of the list. Otherwise, if a similar case is found, meaning both cases are exactly the same, the buggy case is disregarded and the algorithm begins to classify another input buggy case. If, after exhausting all the edges, no matching edge is found, it appends a new labeled edge to the list of edges; the edge points to the buggy case if the buggy case is already classified, otherwise the buggy case is first inserted in the case library.

The last algorithm introduced accepts one parameter, the discrepancies of the buggy case, and checks whether all the discrepancies in the buggy case are classified. It returns 1 if some discrepancies are not yet classified; otherwise it returns 0.

Algorithm 4.5.1.4

Input  : discrepancies of buggy case
Output : integer indicating whether all the discrepancies are classified

Check_status (discrepancies_buggycase)
    flag ← 0
    For each discrepancy ∈ discrepancies in buggy case
        If discrepancy is not classified then
            flag ← 1
            return flag
    Return flag
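Check_status reduces to a scan over the discrepancies; here classification is modeled as membership in a set of already-classified discrepancies (an assumption, since the text does not say how the per-discrepancy flag is stored):

```python
def check_status(discrepancies, classified):
    """Return 1 as soon as an unclassified discrepancy is found, else 0."""
    for discrepancy in discrepancies:
        if discrepancy not in classified:
            return 1
    return 0
```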

We now proceed to the next scheme for building the case library. The algorithm presented next utilizes the number of goals and variables of the buggy programs as the primary index in organizing the buggy cases. It accepts one parameter, the file of buggy_structure.


Algorithm 4.5.2

Input  : File of buggy_structure
Output : Case Library

Build_Case_library2(File of buggy_structure)
    head ← null
    While we have not read the entire file of buggy_structure
    Begin
        Read one buggy_structure
        Create a new buggy_node and copy all the information of the buggy_structure
            to the buggy_node
        If head = null then
        Begin
            Create a new index_node
            Copy the number of goals and variables of the buggy_structure to the new index_node
            Let the index_node's case pointer point to the buggy_node
            head ← index_node
        End If
        Else
        Begin
            For each index_node ∈ nodes in the List_of_goals_and_variables
                If buggy_node(no_goals, no_variables) = index_node(no_goals, no_variables) then
                Begin
                    For each case ∈ cases in the linked list of buggy cases
                        If buggy_node = case then
                            Disregard the buggy_node and classify another input set
                    End For
                    If buggy_node ∉ cases in the linked list of buggy cases then
                        Append the buggy_node to the list of buggy cases
                End If
            End For
            If no index_node matches then
                Append a new index_node at the end of the List_of_goals_and_variables,
                copy the number of goals and variables, and insert the buggy_node
        End
    End While

Building the case library starts by initializing the variable head, which keeps track of the first node in the List_of_goals_and_variables. A new buggy_node is created each time a buggy_structure is read from the file, and all the information stored in the buggy_structure is copied into it; this becomes the buggy case. If the variable head is not null, the algorithm determines whether there exists an index_node matching the number of goals and variables of the buggy case. If an exact match is found, it determines

whether there exists a case that is exactly the same as the buggy case. If a similar case is found, the buggy case is disregarded and the algorithm classifies another input set. If, after exhausting the index_nodes in the List_of_goals_and_variables, no match exists, a new index_node is appended to the List_of_goals_and_variables and the buggy case is inserted.


Chapter 5

CASE RETRIEVAL

This chapter describes the algorithms for retrieving similar buggy programs, as well as the similarity metrics being utilized. The first algorithm presented utilizes MEDD's knowledge level errors for reverselist/2 and sumlist/2 as the similarity metric in retrieving similar buggy cases. This algorithm appends any unknown behavior level error/knowledge level error at the end of the error hierarchy. It utilizes the case libraries whose data structures are described in Sections 4.2.1 and 4.2.2 and whose edges are labeled with behavior level errors and with the number of goals and variables, respectively.

Algorithm 5.1

Input 1 : File of buggy_structure, Case Library (Linear_list)
Input 2 : File of buggy_structure, Linear_list, Case Library (MEDD's error hierarchy)
Output  : Similar buggy case

Retrieve_Cases (File of buggy_structure, Linear_list, Case Library)
    ptr ← head of Linear_list, ptr_kle ← null, value ← 0
    While we have not read the entire file of buggy_structure
    Begin
        Read one buggy_structure
        ptr_kle ← ptr, match_all ← 1
        While ptr_kle <> null ^ match_all
        Begin
            value ← Compute_Similarity(discrepancies of buggy_structure, ptr_kle)

            If value = 1 ^ bestmatch then
            Begin
                If ptr_kle has sibling(s) then
                    For each sibling node ∈ sibling nodes
                        Retrieve(sibling node)
                Else
                    Retrieve(node)
            End If
            Else if value = 1 ^ partial match then
            Begin
                ptr_kle ← ptr_kle → next_childptr
                Repeat
                    value ← Compute_Similarity(discrepancies of buggy_structure, ptr_kle)
                    ptr_kle ← ptr_kle → next_childptr
                Until (bestmatch) V (value = 0)
                If bestmatch then
                    For each sibling node ∈ sibling nodes
                        Retrieve(sibling node)
                Else if value = 0 then
                    Retrieve(last sibling node)
            End Else if
            match_all ← Check_status(buggy_structure)
            ptr_kle ← ptr_kle → next_childptr
        End While
        If ¬(match_all) then
        Begin
            Display that no similar case is found
            Append the unknown knowledge level error at the end of the error hierarchy
        End If
    End While

The algorithm starts by initializing pointer and integer variables. The variable ptr steadily points to the first node of the Linear_list, while ptr_kle keeps track of each node in the Linear_list. The integer variable value represents the computed value of the similarity metric and assumes the value 1 if the computation exceeds the system's threshold. The integer variable match_all initially assumes the value 1, indicating that the discrepancies of the buggy_structure are not yet all classified. The retrieval process starts by reading a buggy_structure. It then computes the similarity metric between each knowledge level error in the Linear_list and the discrepancies of the buggy

structure. If the computation exceeds the system's threshold and a best match is found, the Retrieve procedure retrieves the cases from each corresponding sibling node in the error hierarchy. Otherwise, if the corresponding node in the error hierarchy does not have any sibling, the cases are retrieved from the node itself. If the computation exceeds the system's threshold but only a partial match is found, the algorithm searches for the knowledge level error that best matches. If, after an exhaustive search, only a partial match is found, Retrieve retrieves cases from the node where the last partial match was found. The procedure Check_status finally checks whether the behavior level errors, or discrepancies, of the buggy program are classified. If not, the algorithm displays that no similar case exhibiting the unknown behavior level errors is found, and appends a new node containing the unknown discrepancies to the error hierarchy.

The first algorithm for retrieving similar buggy cases requires the procedure Retrieve, which is responsible for retrieving cases from the edges of a leaf node. Algorithm 5.1.1 describes the steps involved. It accepts one parameter, the leaf node in the error hierarchy. The retrieval procedure starts by initializing two integer variables, number and counter. A random number is generated to determine the edge from which a similar case will be retrieved. The procedure locates the target edge and, once found, generates another random number to determine which similar case to retrieve. Finally, it locates and displays the target case.

Algorithm 5.1.1

Retrieve (leafnode in the error hierarchy)
Begin
    counter ← 0
    number ← 0
    number ← random(leafnode → number_edges)
    While counter <> number
    Begin
        counter ← counter + 1
        edge ← edge → ptr_arc
    End
    counter ← 0
    number ← random(edge → number_cases)
    While counter <> number
    Begin
        counter ← counter + 1
        case ← ptr_to_case
    End
    Display the similar buggy case
End
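The two counter loops in Algorithm 5.1.1 are the same walk over a linked list; a sketch (the Edge/Case layouts loosely follow Tables 4-5 and 4-3, and random.randrange stands in for the random number generator):

```python
import random

class Case:
    def __init__(self, program, next_case=None):
        self.program, self.next_case = program, next_case

class Edge:
    def __init__(self, cases, number_cases, next_edge=None):
        self.cases, self.number_cases, self.next_edge = cases, number_cases, next_edge

def walk(node, number, link):
    """Advance `number` links, mirroring the counter loop in Algorithm 5.1.1."""
    for _ in range(number):
        node = getattr(node, link)
    return node

def retrieve(first_edge, number_edges):
    """Pick a random edge of the leaf node, then a random case on that edge."""
    edge = walk(first_edge, random.randrange(number_edges), "next_edge")
    case = walk(edge.cases, random.randrange(edge.number_cases), "next_case")
    return case.program
```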

The second algorithm presented no longer utilizes MEDD's knowledge level errors as the similarity metric; instead, it uses the number of goals and variables of the buggy program as the index for retrieving similar buggy cases.

Algorithm 5.2

Input  : File of buggy_structure
         Case library of buggy cases whose primary index is the number of goals and variables
Output : Similar buggy case

While we have not read the entire file of buggy_structure
Begin
    Read one buggy_structure
    For each index_node ∈ List_of_goals_and_variables
        If buggy_structure(no_goals, no_variables) = index_node(no_goals, no_variables) then
            Display one buggy case from the linked list of cases
        End If
    End For
    If buggy_structure(no_goals, no_variables) matches no index_node then
        Display that no similar buggy case can be found
End While


This algorithm accepts two parameters, namely a file of buggy_structure, which is different from the file used in building the case library, and the case library whose primary index is the number of goals and variables. The algorithm reads each buggy_structure from the file, then searches for the index_node that contains the same number of goals and variables as the buggy_structure. If the search is successful, meaning an index_node with the same number of goals and variables exists, it displays one buggy case from the linked list of buggy cases. If, after exhausting all the nodes in the List_of_goals_and_variables, no node contains the same number of goals and variables, it displays that no similar case can be found.
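Algorithm 5.2's search is a linear scan over the index nodes; a sketch with index_nodes modeled as dicts (field names follow Table 4-7):

```python
def retrieve_by_size(index_nodes, no_of_goals, no_of_variables):
    """Return the case list of the index_node matching the input's goal and
    variable counts, or None when no index_node matches."""
    for node in index_nodes:
        if (node["no_of_goals"] == no_of_goals
                and node["no_of_variables"] == no_of_variables):
            return node["cases"]
    return None
```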

5.3 HEURISTIC

This research also investigates whether applying a heuristic improves the performance of the retrieval process.

This heuristic arranges the subtrees in the case library according to the number of times their knowledge level errors occur in the buggy cases while the case library is being built. Subtrees are reorganized from the most frequently occurring knowledge level errors to the least frequently occurring ones: the subtree with the most instances is placed at the leftmost position as the first subtree, and the subtree with the fewest instances at the rightmost position as the last subtree. Before the retrieval process, the reorganized tree is traversed using inorder traversal and a temporary list is created again, in which each node represents one knowledge level error. The similarity metric is computed only for the first five knowledge level errors, and one knowledge level error is randomly chosen from the remaining ones. This is done under the assumption that no buggy case can exhibit all the knowledge level errors in the error hierarchy at the same time.
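The reorganization step can be sketched as a descending sort of the root's subtrees on their occurrence counts (a sketch; each subtree is assumed to carry a no_of_instances counter, as described in the text):

```python
def reorganize(subtrees):
    """Order subtrees from the most to the least frequently occurring
    knowledge level error, so frequent errors are compared first."""
    return sorted(subtrees, key=lambda t: t["no_of_instances"], reverse=True)
```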

The heuristic being utilized involves the following:

An additional member, no_of_instances, is supplied in each node of the error hierarchy so that in building the case library, this field is incremented each time a buggy program is classified. This field indicates the number of times that a particular knowledge level error occurs while building the case library.

The root of the error_hierarchy is then passed as a parameter to Algorithm 5.3.1, which sorts the subtrees according to the number of occurrences of knowledge level errors in descending order.

A new linear list is created again using Algorithm 4.2.2.1.

The same algorithms are employed in retrieving similar buggy cases when the heuristic is applied; only the parameters change, namely the newly created linear list and the sorted error hierarchy, which is actually the case library.


Chapter 6

RESULTS AND DISCUSSION

This chapter analyzes the data structures of the case library and examines the performance of the algorithms in terms of their ability to organize the buggy cases. The suitability of the similarity metrics and the average running times in retrieving similar buggy cases were also analyzed and evaluated.

6.1 Data Structure

We wish to compare the three data structures of the case libraries in terms of their storage/memory requirements. This can be achieved by computing the number of bytes required for each data structure. Let us recall that the data structures for the case library are: Linear_list with graphs of buggy cases, MEDD's error hierarchy with graphs of buggy cases, and List_of_goals_and_variables.

Table 6-1 shows that the total number of bytes required to utilize Linear_list is 138 bytes. The column labeled Member shows the declaration for each member of the structure. The declarations follow the syntax of declaring identifiers in the C language. The column labeled Data Type indicates the type of data for each member, and the column labeled Length of Data Type in Bytes indicates the number of bytes required for each data type.


MEMORY REQUIREMENT OF EACH STRUCTURE MEMBER OF LINEAR_LIST

MEMBER                  DATA TYPE   LENGTH OF DATA TYPE   NUMBER OF
                                    IN BYTES              BYTES
char behavior[3][38]    char        1                     114
int counter             int         2                     2
char node_name[5]       char        1                     5
char parent_name[5]     char        1                     5
*next_childptr          pointer     4                     4
*left_ptr               pointer     4                     4
*ptr_arcnode            pointer     4                     4
Total number of bytes                                     138

Table 6-1 Memory requirement of Linear_list

The column labeled Number of Bytes indicates the required number of bytes for each member. The structure member components for the graphs of buggy cases are not shown in view of the fact that the number of bytes is the same for both data structures.

MEMORY REQUIREMENT OF EACH STRUCTURE MEMBER OF MEDD'S ERROR HIERARCHY

MEMBER                  DATA TYPE   LENGTH OF DATA TYPE   NUMBER OF
                                    IN BYTES              BYTES
char behavior[3][38]    char        1                     114
int counter             int         2                     2
char node_name[5]       char        1                     5
char parent_name[5]     char        1                     5
*parent_ptr             pointer     4                     4
*first_childptr         pointer     4                     4
*next_childptr          pointer     4                     4
*left_ptr               pointer     4                     4
*ptr_arcnode            pointer     4                     4

MEMORY REQUIREMENT OF EACH STRUCTURE MEMBER OF LINEAR_LIST

char behavior[3][38]    char        1                     114
int counter             int         2                     2
char node_name[5]       char        1                     5
char parent_name[5]     char        1                     5
*next_childptr          pointer     4                     4
*left_ptr               pointer     4                     4
*ptr_arcnode            pointer     4                     4

MEMORY REQUIREMENT OF EACH STRUCTURE MEMBER OF STACK

error_node              pointer     4                     4
*next                   pointer     4                     4
Total number of bytes                                     292

Table 6-2 Memory requirement of MEDD's Error Hierarchy

The next data structure, MEDD's Error Hierarchy, utilizes the data structures of MEDD's Error Hierarchy, Linear_list and Stack. The Linear_list is utilized so that each subtree in the error hierarchy need not be traversed in computing the similarity metric; a subtree is traversed only if the similarity metric exceeds the system's threshold. The traversal is non-recursive and utilizes a stack to traverse the tree. Table 6-2 shows the memory requirement for this data structure. It shows each member of the three data structures mentioned previously.

MEMORY REQUIREMENT OF EACH STRUCTURE MEMBER OF LIST_OF_GOALS_AND_VARIABLES

MEMBER                  DATA TYPE   LENGTH OF DATA TYPE   NUMBER OF
                                    IN BYTES              BYTES
int no_of_goals         int         2                     2
int no_of_variables     int         2                     2
*ptr_case               pointer     4                     4
Total number of bytes                                     8

Table 6-3 Memory requirement of List_of_goals_and_variables

The last data structure is List_of_goals_and_variables. Let us recall that this data structure utilizes the number of goals and variables as the primary index in organizing the buggy cases. Table 6-3 shows the memory requirement for this data structure.

The Tables show that MEDD's Error Hierarchy requires 292 bytes, Linear_list requires 138 bytes, and List_of_goals_and_variables requires only 8 bytes. The memory requirement of MEDD's Error Hierarchy is significantly greater than those of Linear_list and List_of_goals_and_variables; the combination of Linear_list and Stack requires about half the memory of MEDD's Error Hierarchy, and List_of_goals_and_variables requires the least memory space. These data structures are utilized both in building the Case Library and in retrieving similar buggy cases. Although List_of_goals_and_variables requires the least number of bytes, the ability of the algorithms to organize and to retrieve similar buggy cases should be taken into consideration before an inference can be made.

6.2 Organization of Cases

The data structures of the case libraries will not be discussed in relation with the organization of buggy cases since they do not affect the performance of the labeled edges in organizing the buggy cases. We wish to evaluate the ability of the three algorithms to organize the buggy cases in the case library. The data set that was used in building the case library is the same data set used by Sison (1998) in building MEDD's error hierarchy: 64 buggy naïve reverse/2 and 56 buggy sumlist/2 programs were obtained. Each buggy case is evaluated to determine whether it is correctly indexed by all the knowledge level errors that it exhibits. Let us recall that the first two algorithms are doubly indexed. They utilize MEDD's knowledge level errors as the primary index in organizing the cases in the case library, with the secondary indices being behavior level errors and number of goals and variables, respectively. The third algorithm is not doubly indexed; it utilizes the number of goals and variables as the primary index in organizing the buggy cases.

The performance of the third algorithm in organizing the buggy cases is not shown in view of the difficulty of classifying these buggy cases according to types of knowledge level error. The buggy cases exhibit varied types of knowledge level errors per node. Figure 6-1 illustrates the difficulty of classifying the buggy cases. In the Figure, it can be seen that the node whose number of goals and variables are 4 and 3, respectively, consists of cases with different types of knowledge level error. Buggy program 1 exhibits a knowledge level error known as lack of background knowledge - termination. Buggy program 2 exhibits a different knowledge level error known as lack of task knowledge - reversing.
Node: no. of goals = 4, no. of variables = 3

   Buggy program 1                   Buggy program 2
   reverse([H],[H]).                 reverse([ ],[ ]).
   reverse([H|T],R) :-               reverse([H|T],R) :-
       reverse(T,R),                     reverse(T,R),
       append(R,[H],R).                  append(H,R,R).

Node: no. of goals = 4, no. of variables = 4

   Buggy program 3                   Buggy program 4
   reverse([H],[H]).                 reverse([ ],[ ]).
   reverse([H|T],R) :-               reverse([H|T],R) :-
       reverse(T,RT),                    reverse(T,R1),
       append(RT,[H],R).                 append(H,R1,R).

Figure 6-1 Illustrates the difficulty of classifying the buggy cases according to types of knowledge level errors

Similarly, the next node whose number of goals and variables are 4 and 4, respectively, contains buggy programs 3 and 4, which exhibit the same knowledge level errors as buggy programs 1 and 2, respectively. The arrangement of cases according to types of knowledge level error is scattered across different nodes. This scenario is the reason why the cases are difficult to classify. Unlike the third algorithm, the two algorithms that employ MEDD's knowledge level errors as the primary index, with number of goals and variables and behavior level errors as secondary indices, were able to organize the buggy cases according to types of knowledge level error. Figure 6-2 illustrates how MEDD and number of goals and variables can organize the buggy cases using the same set of buggy programs in Figure 6-1.


[Figure 6-3: subtrees of the error hierarchy labeled replace(base,e1,[H]), replace(base,e2,X?), replace(subgoal1,T1,R), replace(subgoal2,T1,X?), replace(subgoal2,T1,H), and replace(subgoal2,[H],X?), with edges labeled X?=[H], X?=H, X?=R, X?=H, X?=R, and X?=T1 pointing to the buggy cases below]

   Buggy program 1                   Buggy program 2
   reverse([H],[H]).                 reverse([H],[H]).
   reverse([H|T],R) :-               reverse([H|T],R) :-
       reverse(T,R),                     reverse(T,RT),
       append(R,[H],R).                  append(RT,[H],R).

   Buggy program 3                   Buggy program 4
   reverse([ ],[ ]).                 reverse([ ],[ ]).
   reverse([H|T],R) :-               reverse([H|T],R) :-
       reverse(T,R),                     reverse(T,R1),
       append(H,R,R).                    append(H,R1,R).

Figure 6-3 Illustrates how MEDD and behavior level errors organized the buggy cases using the same set of buggy programs in Figure 6-1

Figures 6-4 to 6-6 show the performance of the two algorithms in organizing the buggy cases with 3, 2, and 1 knowledge level errors in the reverselist/2 dataset. The Figures also show the percentage of buggy cases with 3, 2, and 1 knowledge level errors that were fully, partially, and erroneously indexed in the reverselist/2 dataset.


[Figure 6-4 data: behavior level errors - fully indexed 100%, partially indexed 0%; no. of goals and variables - fully indexed 77%, partially indexed 23%]

Figure 6-4 Performance of the secondary indices in organizing the buggy cases with 3 knowledge level errors in the reverselist/2 dataset

[Figure 6-5 data: behavior level errors - fully indexed 100%, partially indexed 0%, erroneously indexed 0%; no. of goals and variables - fully indexed 60%, partially indexed 32%, erroneously indexed 8%]

Figure 6-5 Performance of the secondary indices in organizing the buggy cases with 2 knowledge level errors in the reverselist/2 dataset


[Figure 6-6 data: behavior level errors - fully indexed 100%, erroneously indexed 0%; no. of goals and variables - fully indexed 62%, erroneously indexed 38%]

Figure 6-6 Performance of the Secondary indices in organizing the buggy cases with 1 knowledge level error in the reverselist/2 dataset

In each Figure, the legend indicates the secondary index that was utilized in organizing/indexing the buggy cases. The values for number of goals and variables are averages of its performance accuracy rates over three random orderings of the dataset.

The Figures show that the usage of number of goals and variables as the secondary index has the propensity to partially index buggy cases with 3 and 2 knowledge level errors. About 23% of the buggy cases with 3 knowledge level errors and 32% of the buggy cases with 2 knowledge level errors are partially indexed. Partially indexed buggy cases are buggy cases with multiple knowledge level errors that are indexed by only one knowledge level error. Figure 6-7 illustrates the reason behind this partial classification. In the Figure, Buggy program 3 contains two knowledge level errors, indicated first by replace(subgoal2,[H],H) and second by replace(base,e1,[H]) and replace(base,e2,[H]). It can be seen that Buggy program 3 was not indexed by the second knowledge level error. The reason for this is that, when Buggy program 3 was classified, the difference links labeled 4,4, which indicate the number of goals and variables for the 2 leaf nodes, were already constructed and their edges point to Buggy program 1. Since Algorithm 4.5.1.3 states that a buggy case will be inserted only the first time that it is classified, Buggy program 3 was no longer inserted or appended at the end of the cases when it was classified under the second knowledge level error.

[Figure 6-7: subtree replace(subgoal2,[H],H), with child nodes replace(subgoal2,T1,H) and replace(subgoal2,[H],X?), and subtrees replace(base,e1,[H]) and replace(base,e2,X?), with edges labeled X?=R, X?=T1, X?=[H], and X?=H, and difference links labeled 4,4 pointing to the buggy cases below]

   Buggy program 1                   Buggy program 2
   reverse([H],[H]).                 reverse([ ],[ ]).
   reverse([H|T],R) :-               reverse([H|T],R) :-
       reverse(T,R1),                    reverse(T,RT),
       append(H,R1,R).                   append(RT,H,R).

   Buggy program 3
   reverse([H],[H]).
   reverse([H|T],R) :-
       reverse(T,RM),
       append(RM,H,M).

Figure 6-7 Illustrates how a buggy program with 2 knowledge level errors (Buggy program 3) can be partially indexed

Notice also that buggy cases with 1 knowledge level error were not partially classified. Partial classification is unlikely to happen for this type of case since the buggy cases are simply appended in the case library.

The Figures also show that the usage of number of goals and variables as the secondary index has the tendency to erroneously index the buggy cases. Buggy cases are erroneously indexed under knowledge level errors that these buggy programs do not in fact exhibit; buggy programs having one knowledge level error, in particular, can be erroneously indexed by two or more knowledge level errors. About 8% of the buggy cases with 2 knowledge level errors and 38% of buggy cases with 1 knowledge level error are erroneously indexed. Buggy cases with 1 knowledge level error are most susceptible to multiple classification, as evidenced by the percentage of erroneously classified buggy cases. Notice that buggy cases with 1 knowledge level error obtained the largest percentage of buggy cases that are erroneously indexed.

[Figure 6-8: two subtrees of the error hierarchy, invalid append, with behavior level errors remove(subgoal2) and replace(head,R,[X?|H]), and lack of background knowledge - termination, with behavior level errors replace(base,e1,[H]) and replace(base,e2,X?), with edges labeled X?=[H], X?=T1, and X?=H pointing to the buggy cases below]

   Buggy program 1                   Buggy program 2
   reverse([H],[H]).                 reverse([ ],[ ]).
   reverse([H|T],[L|H]) :-           reverse([H|T],[R,H]) :-
       reverse(T,L).                     reverse(T,R).

Figure 6-8 Illustrates how a buggy program (Buggy program 2) with 1 knowledge level error can be erroneously indexed

Figure 6-8 illustrates how a buggy program with 1 knowledge level error can be erroneously indexed. The Figure shows two buggy programs, namely: Buggy program 1 and Buggy program 2. Buggy program 1 contains 2 knowledge level errors, namely: invalid append and lack of background knowledge - termination. Invalid append is a knowledge level error which is composed of the behavior level errors remove(subgoal2) and replace(head,R,[T1|H]). The knowledge level error lack of background knowledge - termination is composed of the behavior level errors replace(base,e1,[H]) and replace(base,e2,[H]). Buggy program 2 contains 1 knowledge level error, known as invalid append, which is composed of the behavior level errors mentioned previously. When Buggy program 1 was classified into the Error Hierarchy, links or edges labeled 3,3, representing the number of goals and variables, were created for each leaf node. These edges point to Buggy program 1. By the time Buggy program 2 was classified, it was appended at the end of Buggy program 1, making it multiply/erroneously indexed by the knowledge level error known as lack of background knowledge - termination.

These two unfavorable inclinations can be attributed to the ordering effects of buggy cases. Different orderings of the buggy cases may yield different classifications or classification hierarchies (Sison, 1998). The two major drawbacks were also manifested in the sumlist/2 dataset. Figures 6-9 to 6-11 show the performance of the secondary indices in organizing the buggy cases with 3, 2, and 1 knowledge level errors in the sumlist/2 dataset.

[Figure 6-9 data: behavior level errors - fully indexed 100%, partially indexed 0%; no. of goals and variables - fully indexed 80%, partially indexed 20%]

Figure 6-9 Performance of the Secondary indices in organizing the buggy cases with 3 knowledge level errors in the sumlist/2 dataset


[Figure 6-10 data: behavior level errors - fully indexed 100%, partially indexed 0%, erroneously indexed 0%; no. of goals and variables - fully indexed 66%, partially indexed 30%, erroneously indexed 4%]

Figure 6-10 Performance of the Secondary indices in organizing the buggy cases with 2 knowledge level errors in the sumlist/2 dataset

[Figure 6-11 data: behavior level errors - fully indexed 100%, erroneously indexed 0%; no. of goals and variables - fully indexed 79%, erroneously indexed 21%]

Figure 6-11 Performance of the Secondary indices in organizing the buggy cases with 1 knowledge level error in the sumlist/2 dataset


The effect of number of goals and variables to partially index the buggy cases was manifested in buggy cases with 3 and 2 knowledge level errors. About 20% of the buggy cases with 3 knowledge level errors and 30% of buggy cases with 2 knowledge level errors are partially classified. Notice also that buggy cases with 1 knowledge level error were not partially classified. As discussed earlier, partial classification is unlikely to happen for this type of case since the buggy cases are simply appended in the case library.

In like manner, the effect of number of goals and variables to erroneously index the buggy cases was manifested in buggy cases with 2 and 1 knowledge level errors. About 4% of buggy cases with 2 knowledge level errors and 21% of buggy cases with 1 knowledge level error are erroneously indexed.

Similarly, buggy cases with 1 knowledge level error obtained the largest percentage of buggy cases that are erroneously indexed. This strengthens the observation that buggy cases with 1 knowledge level error are most susceptible to multiple classification as evidenced by the percentage of erroneously classified buggy cases in both datasets.

Results show that behavior level errors are more appropriate than number of goals and variables in organizing the buggy Prolog programs. Buggy cases with 3, 2, and 1 knowledge level errors in the reverselist/2 and sumlist/2 datasets were correctly/fully indexed.

Despite the disadvantages brought about by the partial and erroneous classification of number of goals and variables, the first side effect, partial classification, is not detrimental to the retrieval process, because these cases will only be partially retrieved. The cases that were erroneously indexed are in fact the ones that will be detrimental to the retrieval process. However, the cases that are partially classified and erroneously indexed comprise only a small percentage and will not significantly affect the retrieval process. Perhaps the effect of these erroneously retrieved cases will become evident if the case library grows in size.


6.3 Suitability of Similarity Metrics

The data structures of the case libraries will not be discussed in relation with the retrieval of similar buggy cases since they do not influence the ability of the similarity metrics in retrieving similar buggy programs.

We wish to evaluate the suitability of the similarity metrics in retrieving similar buggy Prolog programs. 63 sumlist/2 programs and 63 reverselist/2 programs were obtained. In the reverselist/2 dataset, 3 programs are correctly identified and 12 programs have unrecognizable intentions. In the sumlist/2 dataset, 21 programs are correctly identified and 12 programs have unrecognizable intentions. Buggy programs with unrecognizable intentions are programs whose intentions are difficult to understand. This came as no surprise, since these programs were written by students who were learning Prolog for the first time. The results also indicate that the sumlist/2 dataset is simpler than the reverselist/2 dataset, as evidenced by the number of identified programs that are correct. The retrieved buggy reverselist/2 and sumlist/2 programs were then given to an expert, who was asked to verify whether the retrieved programs are similar or not. Each similarity metric's performance is examined in terms of its ability to retrieve similar buggy programs (percentage of similar buggy programs retrieved). We consider two cases of similarity. The first is the same type and the same number of knowledge level errors. An example is shown in Figure 6-12a: the two buggy programs in the example exhibit the same type and the same number of knowledge level errors, known as invalid append.

The second case is the same type but a different number of knowledge level errors. Under this case, the buggy programs that are retrieved may differ in the number of knowledge level errors; that is, a buggy program exhibits the same knowledge level error under consideration but may still have another type of knowledge level error, as shown in Figure 6-12b. The buggy programs in the example exhibit the same knowledge level error, known as invalid append, although the second buggy program also exhibits another knowledge level error, known as lack of background knowledge - termination.

a) reverse([ ],[ ]).                 reverse([ ],[ ]).
   reverse([H|T],[R|H]) :-           reverse([H|T],[T|H]).
       reverse(T,R).
                      (invalid append)

b) reverse([ ],[ ]).                 reverse([H],[H]).
   reverse([H|T],[R|H]) :-           reverse([H|T],[T|H]).
       reverse(T,R).
                      (invalid append; lack of background knowledge - termination)

Figure 6-12 Examples of similar buggy programs

Figures 6-13 to 6-15 show the performance of the similarity metrics in retrieving similar buggy cases in the reverselist/2 dataset. The label Organization/Similarity metric indicates the organization of the Case Library that was used during the retrieval process. The figures show the three organizations of the Case Library, namely: MEDD and behavior level errors (MEDD and ble), MEDD and number of goals and variables (MEDD and ngv), and number of goals and variables (ngv). The similarity metric indicates the mechanism that was used in retrieving similar buggy cases. We also wish to evaluate the two similarity metrics, namely: MEDD's student model (MEDD) and number of goals and variables (ngv). The label fully retrieved indicates the percentage of buggy cases that were retrieved successfully, meaning the system was able to retrieve similar buggy cases which contain all the knowledge level errors that a buggy program exhibits. The label partially retrieved indicates the percentage of similar buggy cases that do not exhibit all the knowledge level errors that a buggy program may contain; these buggy cases may in fact exhibit a knowledge level error different from the one under consideration. The label dissimilar cases indicates the percentage of buggy cases that exhibit different knowledge level error(s), and the label cases not found indicates the percentage of cases not found in the case library.


[Figure 6-13 data: MEDD and ble - fully retrieved 100%; MEDD and ngv - fully retrieved 100%; ngv - fully retrieved 12%, partially retrieved 35%, dissimilar cases 41%, cases not found 12%]

Figure 6-13 Performance of the 2 similarity metrics in retrieving similar buggy cases with 3 knowledge level errors in the reverselist/2 dataset

[Figure 6-14 data: MEDD and ble - fully retrieved 100%; MEDD and ngv - fully retrieved 82%, dissimilar cases 15%; ngv - fully retrieved 45%, dissimilar cases 27%, cases not found 27%]

Figure 6-14 Performance of the 2 similarity metrics in retrieving similar buggy cases with 2 knowledge level errors in the reverselist/2 dataset


[Figure 6-15 data: MEDD and ble - fully retrieved 100%; MEDD and ngv - fully retrieved 88%, partially retrieved 12%; ngv - fully retrieved 19%, partially retrieved 31%, dissimilar cases 38%, cases not found 12%]

Figure 6-15 Performance of the 2 similarity metrics in retrieving similar buggy cases with 1 knowledge level error in the reverselist/2 dataset

The Figures show that the usage of MEDD's student model as the similarity metric was able to retrieve 100% of the required number of knowledge level errors in the buggy cases with 3, 2, and 1 knowledge level errors. The obvious reason for its perfect performance is that the buggy cases are completely indexed and thus yield the same result during the retrieval process. This also confirms Veloso's (1994) statement that when the number of cases in the case library is small, it is feasible to guarantee a best match.

However, its performance deviates when it utilizes the Case library whose indices are MEDD and number of goals and variables. About 82% are fully retrieved for buggy cases with 2 knowledge level errors and 88% are fully retrieved for buggy cases with 1 knowledge level error. The values indicate that it was not able to fully retrieve the required number of knowledge level errors in the buggy cases. This results in partial retrieval: about 18% of buggy cases with 2 knowledge level errors and 12% of buggy cases with 1 knowledge level error are not fully retrieved. Let us recall that partially retrieved cases are cases that do not exhibit all the knowledge level errors that a buggy program may contain; these buggy cases may in fact exhibit a different knowledge level error. This consequence can be attributed to the organization of cases in the case library. Let us recall that the secondary index, the number of goals and variables, has two side effects, namely: partial and erroneous classification. The cases that were partially retrieved are actually the cases that were erroneously indexed during the process of organizing them in the Case library.

But even though it was not able to perfectly retrieve the required number of similar buggy cases, its performance does not lag far behind its performance using the case library whose indices are MEDD and behavior level errors. As discussed earlier, partially classified cases have no significant effect, and considering that only a small percentage of the cases is multiply/erroneously indexed, the chances of retrieving those cases are minimal. Figures 6-16 to 6-18 show the performance of the two similarity metrics in retrieving similar buggy cases in the sumlist/2 dataset. The same findings were observed in the second dataset. The usage of MEDD's student model as the similarity metric was able to retrieve 100% of the required number of knowledge level errors in the buggy cases with 3, 2, and 1 knowledge level errors. Furthermore, its performance also deviates when it utilizes the Case library whose indices are MEDD and number of goals and variables. About 33% are partially retrieved for buggy cases with 3 knowledge level errors and 55% are partially retrieved for buggy cases with 2 knowledge level errors.
[Figure 6-16 data: MEDD and ble - fully retrieved 100%, partially retrieved 0%; MEDD and ngv - fully retrieved 100%, partially retrieved 0%; ngv - fully retrieved 67%, partially retrieved 33%]

Figure 6-16 Performance of the 2 similarity metrics in retrieving similar buggy cases with 3 knowledge level errors in the sumlist/2 dataset


[Figure 6-17 data: MEDD and ble - fully retrieved 100%, partially retrieved 0%; MEDD and ngv - fully retrieved 94%, partially retrieved 6%; ngv - fully retrieved 45%, partially retrieved 55%]

Figure 6-17 Performance of the 2 similarity metrics in retrieving similar buggy cases with 2 knowledge level errors in the sumlist/2 dataset

[Figure 6-18 data: MEDD and ble - fully retrieved 100%; MEDD and ngv - fully retrieved 83%, dissimilar cases 7%; ngv - fully retrieved 50%, dissimilar cases 25%, cases not found 25%]

Figure 6-18 Performance of the 2 similarity metrics in retrieving similar buggy cases with 1 knowledge level error in the sumlist/2 dataset

Results also show that the usage of MEDD's student model as the similarity metric, with Case libraries organized by behavior level errors and by number of goals and variables, outperformed the usage of number of goals and variables as the similarity metric in retrieving similar buggy cases. This came as no surprise, since the organization of the cases whose primary index is the number of goals and variables is in disarray. This confirms Weber's (1996) statement that only cases that are stored and indexed appropriately can be retrieved successfully. The usage of number of goals and variables as the similarity metric fared poorly in almost all types of cases, except in the sumlist/2 dataset, where 67% of cases with 3 knowledge level errors were correctly retrieved.

Notice also that the usage of number of goals and variables as the similarity metric yields a greater percentage of partially retrieved cases for all types of buggy cases in the reverselist/2 and sumlist/2 datasets. This can be attributed to the organization of the cases in the case library. Let us recall that these cases utilized the number of goals and variables as the primary index, and the organization of these cases could not be evaluated because the arrangement of buggy cases according to types of knowledge level error was scattered across different nodes.

6.4 CPU cycles of Retrieval Algorithms

We wish to evaluate the performance of the retrieval algorithms in terms of their ability to retrieve similar buggy cases with respect to time. Each algorithm's performance can be measured in terms of its case library, specifically its data structure, and the similarity metric it utilizes in retrieving similar buggy cases. Only the major steps/phases in the retrieval algorithms are shown, since the goal of this research is not to obtain a detailed and thorough analysis of their running time, but only an approximate measure that will more or less predict how they would behave in retrieving similar buggy cases. Tables 6-4 to 6-6 show the major steps involved in retrieving similar buggy cases for each data structure and the corresponding similarity metric being utilized in the retrieval process. Let us recall that three data structures were utilized in building the case library, namely: MEDD's Error Hierarchy, Linear_list and List_of_goals_and_variables. Furthermore, 2 similarity metrics were utilized, namely: MEDD's knowledge level error and number of goals and variables. The secondary indices that were used for each data structure will not be discussed in relation with the retrieval algorithms because they do not have a vital role in determining the running time of each data structure in retrieving similar buggy cases.

STEP   PROCEDURE                                         FORMULA
1      Perform sequential search to look for the         t = k*n
       corresponding knowledge level error
2      Utilize a stack to traverse the corresponding     t = 15n + a + 4
       subtree using inorder traversal
3      Perform sequential search to look for the         t = k*n
       randomly chosen buggy case from the list of
       cases
Average running time                                     2*(k*n) + 15n + a + 4

In the sequential-search formula (Hale, 1985), t is the unit of time, k is some constant based on the internal speed of the machine, and n is the number of elements in the list. In the traversal formula (Knuth, 1973), t is the unit of time, n is the number of nodes in the tree, and a is the number of nodes with no right subtree.

Table 6-4 Formula for computing the average running time of retrieving similar buggy cases using MEDD's Error Hierarchy and MEDD's knowledge level error
STEP   PROCEDURE                                         FORMULA
1      Perform sequential search to look for the         t = k*n
       corresponding knowledge level error
2      Perform sequential search to look for the         t = k*n
       randomly chosen buggy case from the list of
       cases
Average running time                                     2*(k*n)

In both formulas (Hale, 1985), t is the unit of time, k is some constant based on the internal speed of the machine, and n is the number of elements in the list.

Table 6-5 Formula for computing the average running time of retrieving similar buggy cases using Linear_list and MEDD's knowledge level error


STEP   PROCEDURE                                         FORMULA
1      Perform sequential search to look for the         t = k*n
       corresponding number of goals and variables
2      Perform sequential search to look for the         t = k*n
       randomly chosen buggy case from the list of
       cases
Average running time                                     2*(k*n)

In both formulas (Hale, 1985), t is the unit of time, k is some constant based on the internal speed of the machine, and n is the number of elements in the list.

Table 6-6 Formula for computing the average running time of retrieving similar buggy cases using List_of_goals_and_variables and number of goals and variables

To compare the major steps involved in retrieving similar buggy cases, this research adopted the formulas of Knuth (1973) and Hale (1985) for linear search and inorder traversal. Hale (1985) states that in sequential search, the worst case (in which the item is at the end of the list or not in the list at all) requires looking at all n items of the list in order to find the desired item or to determine that the item is not in the list. The relationship between t, the computer time, and n, the number of items in the list, is given by t = k*n, where k is some constant based on the internal speed of the machine performing the search and the number of machine instructions needed to look at an item of the list. Hale clarifies that the formula is not used to compute the exact timing of a search method because the value of the constant k is difficult to obtain.

Average running time
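To make Hale's bound concrete, the following is a minimal Python sketch (illustrative only, not code from this research's implementation) that counts the element inspections a sequential search performs; in the worst case the count equals n, which is the quantity that k scales in t = k*n.

```python
def sequential_search(items, target):
    """Scan the list front to back, counting element inspections."""
    comparisons = 0
    for item in items:
        comparisons += 1
        if item == target:
            return comparisons  # found: number of items examined
    return comparisons          # not found: all n items examined

# Worst case: the target is absent, so all n items are inspected,
# which is the n in t = k*n (k folds in the per-item machine cost).
print(sequential_search(list(range(1000)), -1))  # 1000
```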

(Knuth, 1973) states that the computing time for traversing a binary tree in inorder using an auxiliary stack is given by t = 15n + a + 4, where n is the number of nodes in the tree and a is the number of terminal right links. The quantity a can be as low as 1 (assuming n ≠ 0) and as high as n; if left and right are symmetrical, the average value of a is (n+1)/2.
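As a sketch of the stack-driven traversal that the t = 15n + a + 4 bound analyzes (the Node and inorder names are illustrative, not taken from this research's implementation):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder(root):
    """Visit a binary tree in inorder using an explicit auxiliary stack."""
    stack, visited = [], []
    node = root
    while stack or node is not None:
        while node is not None:     # push the entire left spine
            stack.append(node)
            node = node.left
        node = stack.pop()          # leftmost unvisited node
        visited.append(node.value)
        node = node.right           # then traverse its right subtree
    return visited

#      2
#     / \
#    1   3
print(inorder(Node(2, Node(1), Node(3))))  # [1, 2, 3]
```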

The Linear_list and the List_of_goals_and_variables have essentially the same major steps for retrieving a similar buggy case: one sequential search that looks for the corresponding knowledge level errors or number of goals and variables, and another sequential search that looks for the randomly chosen buggy case. Thus, both of them require two sequential searches, and the computing time is 2*(k*n), where 2 is the number of times a sequential search is performed and k*n is the formula given previously for sequential search. MEDD’s error hierarchy requires 2*(k*n) + 15n + a + 4, since it also performs two sequential searches and additionally utilizes a stack to traverse the error hierarchy. Table 6-4 shows that the computing time of utilizing MEDD’s error hierarchy is greater than that of the two other data structures, Linear_list and List_of_goals_and_variables; additional time is spent traversing the corresponding subtree of the error hierarchy. This can be verified by examining the average running times for each case library and the similarity metric being utilized. Figures 6-19 and 6-20 show the average running times, expressed in milliseconds, for each case library and similarity metric used in retrieving similar buggy cases with 3, 2 and 1 knowledge level errors in the reverselist/2 and sumlist/2 datasets. Notice that the case libraries with their corresponding secondary indices, (ble) behavior level error and (ngv) number of goals and variables, are also included in the Figures. Although they were not included in Tables 6-4 to 6-6, this research considered their performance and observed whether there was a significant difference between the two secondary indices.
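The two cost expressions can also be compared numerically. The following is an illustrative sketch, assuming k = 1 and the average a = (n+1)/2 from (Knuth, 1973); it evaluates the formulas, not measured timings:

```python
def cost_linear_list(n, k=1):
    """Two sequential searches over n elements: 2*(k*n)."""
    return 2 * (k * n)

def cost_error_hierarchy(n, k=1, a=None):
    """Two sequential searches plus a stack-based inorder traversal:
    2*(k*n) + 15n + a + 4, with a defaulting to its average (n+1)/2."""
    if a is None:
        a = (n + 1) // 2
    return 2 * (k * n) + 15 * n + a + 4

# The hierarchy's extra traversal term dominates as n grows.
for n in (10, 100, 1000):
    print(n, cost_linear_list(n), cost_error_hierarchy(n))
```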

[Figure 6-19: bar chart; recovered data, average running time in milliseconds by number of knowledge level errors]

Case library / similarity metric          3       2       1
MEDD and ble/MEDD                        24      23      25
MEDD and ngv/MEDD                        25      23      22
Linear_list and ble/MEDD                 20      19      18
Linear_list and ngv/MEDD                 22      20      17
List_of_goals_and_variables/ngv          19      19      17

Figure 6-19 Average running times of retrieving similar buggy cases with 3,2 and 1 knowledge level errors in the reverselist/2 dataset


[Figure 6-20: bar chart; recovered data, average running time in milliseconds by number of knowledge level errors]

Case library / similarity metric          3       2       1
MEDD and ble/MEDD                        24      23      22
MEDD and ngv/MEDD                        24      23      22
Linear_list and ble/MEDD                 19      19      20
Linear_list and ngv/MEDD                 20      19      19
List_of_goals_and_variables/ngv          19      19      19

Figure 6-20 Average running times of retrieving similar buggy cases with 3,2 and 1 knowledge level errors in the sumlist/2 dataset

The following results were obtained from retrieving similar buggy cases with 3 knowledge level errors: MEDD’s error hierarchy takes 24 to 25 milliseconds, Linear_list takes 20 to 22 milliseconds, while List_of_goals_and_variables takes 19 milliseconds. The values indicate that List_of_goals_and_variables is the fastest, followed by Linear_list, with MEDD’s error hierarchy being the slowest. This observation is also evident for buggy cases with 2 and 1 knowledge level errors in the reverselist/2 and sumlist/2 datasets. Furthermore, the selection of the secondary indices has no significant effect because their average running times are comparable.

Notice that the average running times of Linear_list and List_of_goals_and_variables are comparable in the sumlist/2 dataset, although Linear_list was observed to be slightly slower than List_of_goals_and_variables in the reverselist/2 dataset. A likely explanation is the execution time expended in choosing the random edge before a buggy case can be retrieved. The number of edges that maintain the buggy cases in the Linear_list for the sumlist/2 dataset could be small compared with the reverselist/2 dataset, which is possibly why the observation did not manifest in the sumlist/2 dataset.


The average running times affirm that the computing time of MEDD’s error hierarchy is longer than that of Linear_list and List_of_goals_and_variables.

6.5 Heuristic

We wish to evaluate the performance of the heuristic in the retrieval process. Figures 6-21 to 6-26 show the performance of the heuristic for buggy cases with 3, 2 and 1 knowledge level errors in the reverselist/2 and sumlist/2 datasets.

The Figures show that the required number of retrieved similar buggy cases improved when the heuristic was applied in the retrieval process. This applies to all buggy cases with 3, 2 and 1 knowledge level errors in the reverselist/2 and sumlist/2 datasets.

[Figure 6-21: bar chart; recovered data]

                        with heuristic   w/o heuristic
fully retrieved              17               12
partially retrieved          83               88

Figure 6-21 Retrieving Similar buggy cases with 3 knowledge level errors in the reverselist/2 dataset


[Figure 6-22: bar chart; recovered data]

                        with heuristic   w/o heuristic
fully retrieved              56               50
partially retrieved          44               50

Figure 6-22 Retrieving Similar buggy cases with 2 knowledge level errors in the reverselist/2 dataset.

[Figure 6-23: bar chart; recovered data]

                        with heuristic   w/o heuristic
fully retrieved              64               54
partially retrieved          36               45

Figure 6-23 Retrieving Similar buggy cases with 1 knowledge level errors in the reverselist/2 dataset.


[Figure 6-24: bar chart; recovered data]

                        with heuristic   w/o heuristic
fully retrieved              50               50
partially retrieved          50               50

Figure 6-24 Retrieving Similar buggy cases with 3 knowledge level errors in the sumlist/2 dataset.

There are 15 appended nodes in the reverselist/2 error hierarchy and 4 appended nodes in the sumlist/2 error hierarchy; the reverselist/2 hierarchy has almost four times as many appended nodes as the sumlist/2 hierarchy. This affirms that the sumlist/2 dataset is simpler than the reverselist/2 dataset.


Chapter 7

CONCLUSION AND RECOMMENDATIONS

7.1 Conclusion

The linear list representation of MEDD’s knowledge level error is an appropriate data structure both for constructing the case library and for retrieving similar buggy cases. The selection of the data structure was based on the memory requirement each structure needs in building the case library and on the average CPU cycles/running time each data structure needs in retrieving similar buggy cases. It has been shown that the linear list representation of MEDD’s knowledge level error has the smallest memory requirement and the optimal running time for retrieving similar buggy cases. Although its running time is comparable with that of List_of_goals_and_variables and its memory requirement is greater, its percentage of correctly indexed and correctly retrieved cases is superior to that of List_of_goals_and_variables. The Linear_list can be used in a straightforward manner compared with the error hierarchy, which needs a stack in order to be traversed. However, if MEDD is to be performed on the newly appended nodes in the error hierarchy, then it is necessary to utilize the error hierarchy.

The edges labeled with behavior level errors are more appropriate for organizing the buggy cases. The selection was based on the percentages of fully indexed cases and correctly retrieved cases. It has been shown that the buggy cases were correctly indexed under all of the knowledge level errors that they exhibit, and that the similar buggy cases with the required number of knowledge level errors were retrieved. The edges labeled with number of goals and variables yield partial and multiple classification of buggy cases. Although partial classification of buggy cases is not detrimental to retrieving similar buggy cases, the purpose of classifying the buggy cases into all of their correct indices was not achieved.

We have shown that MEDD’s student model is a suitable similarity metric for retrieving similar buggy programs in the reverselist/2 and sumlist/2 datasets. This affirms that student model based diagnosis is essential in an Intelligent Tutoring System. The student model is necessary for effective and efficient instruction. This confirms the claim of (VanLehn, 1991) that without a student model, an ITS would be like a human tutor who knows nothing about the individual learner, and is therefore unable to adjust instruction to changes in the learner’s behavior. The student model directs the course of the tutoring process and provides appropriate remediation.

The heuristic can be utilized to determine the most commonly occurring knowledge level errors of the students, and can thus serve as a basis for instructors teaching Prolog in choosing the concepts to emphasize during classroom discussion in order to improve the knowledge of the students.

7.2 Recommendations

Beyond what this research has presented, a number of improvements can be made.

One possible improvement is to store the case library in physical storage, since the whole structure of the case library is currently kept only in memory. The manner of storing the buggy cases can be arranged so that they need not be classified again according to the knowledge level errors they exhibit when the case library is rebuilt. This can be done by assigning parent names to the buggy cases so that pointers to the buggy cases can be automatically reconstructed from their parent nodes.
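The parent-name scheme can be sketched as follows. This is a hypothetical Python illustration of the recommendation, not code from this research: the record fields, node names, and file format are all assumptions.

```python
import json

# Hypothetical record format: each buggy case carries the name of its
# parent node in the case library, so the parent -> cases pointers can
# be rebuilt on load without reclassifying the cases.
cases = [
    {"id": "case_01", "parent": "missing_base_case", "program": "reverse([], [])."},
    {"id": "case_02", "parent": "missing_base_case", "program": "reverse([H|T], R)."},
    {"id": "case_03", "parent": "wrong_recursive_call", "program": "sumlist(T, S)."},
]

def save_library(path, cases):
    """Persist the case library to physical storage."""
    with open(path, "w") as f:
        json.dump(cases, f)

def rebuild_pointers(cases):
    """Reconstruct parent-node -> case pointers from stored parent names."""
    index = {}
    for case in cases:
        index.setdefault(case["parent"], []).append(case["id"])
    return index

print(rebuild_pointers(cases))
# {'missing_base_case': ['case_01', 'case_02'], 'wrong_recursive_call': ['case_03']}
```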

It is also possible to perform MEDD on the newly appended nodes. This will determine whether there are still unknown knowledge level errors that have to be identified. When this occurs, the error hierarchy should be utilized, since the causal relationship for each node is necessary to form a knowledge level error.


