Interbot: A Resume-Based Employment Interview Chatbot
Using an Enhanced Example-Based Dialog Model

Andrea May G. Aquino
Department of Computer Science
University of Santo Tomas
Espana, Manila, 1008, PH
andreamayaquino@gmail.com

Katherine May Ann R. Bayona
Department of Computer Science
University of Santo Tomas
Espana, Manila, 1008, PH
kmarbayona@gmail.com

Kimberly Ann D.R. Gonzales
Department of Computer Science
University of Santo Tomas
Espana, Manila, 1008, PH
kimberlyanngonzales@yahoo.com

Gabrielle Ann D. Reyes
Department of Computer Science
University of Santo Tomas
Espana, Manila, 1008, PH
gabrielleannreyes@gmail.com

Ria A. Sagum
Department of Computer Science
University of Santo Tomas
Espana, Manila, 1008, PH
riasagum31@yahoo.com

ABSTRACT
Traditional resume-based recruitment interviews conducted by Human Resources (HR) specialists are time-consuming and costly. In-person interviews allow companies to handle only a limited number of job applicants at a time, and there is often no centralized database for resume storage and retrieval. As a result, a substantial amount of time and money is spent interviewing unqualified job applicants. The proponents developed a resume-based employment interview chatbot, using an enhanced example-based dialog model, to evaluate the consistency between job applicants’ resume details and their interview answers. The chatbot replaces the HR interviewer while maintaining the fundamental quality and naturalness of a resume-based interview. The study achieved a 0.98 accuracy rate in comparing and scoring the applicants’ answers and received a neutral rating for dialog naturalness; the respondents agreed on the system’s task success and usability. The study aimed to improve the current hiring process, specifically the initial resume-based interview conducted during applicant screening. Furthermore, the study exploited the potential of chatbots to simulate intelligent conversation more convincingly through an in-depth analysis of the content and meaning of the user’s input.
Categories and Subject Descriptors
I.2.1 [Artificial Intelligence]: Applications and Expert Systems
General Terms
Algorithms, Languages
Keywords
Chatbots, Dialog System, Expert System, Example-Based Dialog Model, Human Resources, Initial Interview
INTRODUCTION
One of the main responsibilities of Human Resources (HR) management is recruiting, screening, interviewing, and selecting a qualified person for a given job, and this task has changed dramatically in recent years as software applications have replaced many recruiters and job interviewers. Manual, paper-based hiring processes are rapidly losing ground as the role of computerized and automated hiring systems expands. Agencies do not simply use these systems as “electronic filing cabinets”; they also use them to assess applicants, to make substantive decisions about applicants’ qualifications, and to make distinctions among applicants. For these reasons, it is in agencies’ and the public’s interest to ensure that computerized and automated hiring systems are at least as effective as the paper-based systems they replace. One example is the online job interview. It is a simple and effective way to interview candidates for employment because it saves employers money: they do not have to pay for a job fair or for candidates to travel to the office. It also saves travel time and can be less stressful than interviewing in person, because candidates can prepare in advance for fixed, generalized questions. There are several types of online job interviews, but the most typical is the interview via webcam: rather than having the applicant travel to an office, the interviewer simply conducts the interview remotely. Meanwhile, some employers use online, web-based systems for interviewing. These automated virtual interview tools help HR departments streamline hiring by offering a combination of video tools that reduce time spent on hiring, increase the quality of candidates, give employers a branded hiring portal, and organize all applicant activity in a web-based applicant tracking system. They offer a branded online video interview solution consisting of a company’s pre-recorded interview questions, which candidates answer with either a video response or a text-based response.
On the other hand, a dialog system, or conversational agent, is one of the many information technology developments that enable computers to converse and interact with humans in a coherent manner. Chatbots such as Cleverbot and Simsimi, automated tutorials, and online assistants are examples of dialog systems in use today. Other examples include agents that answer customers’ questions about products and services on a company’s website or intranet portal, and personalized conversational agents that consult internal and external databases to personalize interactions, such as answering questions about account balances, providing portfolio information, or delivering frequent-flier or membership information.
As dialog systems support a broad range of applications in business, education, healthcare, and entertainment, the study incorporated a chatbot into a dialog system to automate resume-based employment interviews. The researchers chose this type of employment interview because the user’s answers can be verified against his or her resume. It is also the type of interview that merely pre-screens candidates yet takes up a significant share of companies’ time and money. The study developed an efficient way to fast-track the recruitment process by substituting a chatbot for the in-person resume-based interview; the chatbot then scores the applicant based on his or her answers at the end of the process. The study aimed to present a new approach to the development of computerized and automated dialog systems and of automated interviewing and hiring software, as it integrates the two. Interbot is a conversational agent that conducts job interviews, in which a potential employee is screened by the system for prospective employment.
BACKGROUND OF THE STUDY
At present, the hiring process consists of screening and selection. After the review of application forms and resumes, an initial interview is conducted by the HR department. This one-on-one interview mainly discusses the applicant’s curriculum vitae. In this era of modern and sophisticated technology, more companies are using automated hiring systems. Their primary reason for automating the hiring process is faster hiring: companies intend to reduce the time to hire by announcing jobs and by screening, ranking, and referring candidates more quickly. Other reasons are to reduce workload and increase efficiency (McPhie, 2004).
Dialog systems are also being used to improve particular fields. In education, they serve as intelligent instructional software. The Geometry Explanation Tutor is a dialog system that helps students state general explanations of their problem-solving steps. A limitation of the system is that it teaches at “the problem-solving level”: it provides assistance in the context of problem solving but engages students only indirectly in thinking about the reasons behind the solution steps. Another limitation is that the system does not maintain a dialog history, and it also lacks a dialog planning mechanism (Aleven, Popescu, Koedinger, 2001). There are also telephone-based spoken dialog systems. An example is the Conference Room Reservation System (CRRS), which allows users to reserve or cancel rooms by simply stating their constraints in a natural way. The system prompts for missing information and offers alternative solutions if the original constraints cannot be satisfied. It is quite effective in enabling users to reach their goals, but it lacks the language-processing techniques needed to conduct a well-progressing conversation (Pateras, Chapados, Kwan, Lavoie, Tremblay, 1999).
Chatbots are also used in some dialog systems, and various studies show that they can be effective in supporting interactive question answering. ELIZA, one of the pioneering chatbots, was created to emulate a psychotherapist in clinical treatment. It is simple and based on keyword matching: the input is inspected for the presence of a keyword; if such a word is found, the sentence is transformed according to a rule associated with the keyword; if not, a content-free remark or, under certain conditions, an earlier transformation is retrieved. The A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) chatbot is one of the strongest of its type and has won the Loebner Prize, awarded to accomplished humanoid talking robots. The Philippine Land Law Expert (PHILEX) chatbot focuses on answering users who need solutions or assistance regarding property land laws and rights. It can be of great help to lawyers in assisting their clients, and it can also be used by those who seek counseling regarding property land laws and land rights, although it is not intended to substitute for professional practitioners.
Currently, an increasing number of companies are using decision support systems to partially automate and computerize the hiring process. These systems administer questionnaires online or at a kiosk in the personnel office, combine the answers with a digital resume, and make a decision based on preset parameters. One example is oDesk, an online workplace that enables businesses to find, hire, manage, and pay talented independent professionals via the Internet. Businesses can hire these online contract workers for any type of work that can be done in front of a computer, from every tech skill imaginable to project management, customer support, marketing, design, and even legal services (oDesk | CrunchBase Profile, 2007). Another example is Interview Coordinator, which is designed to make recruitment campaigns easier and more efficient. It is an intuitive, low-cost, pay-as-you-use software system available through any browser: a complete interview management and applicant tracking solution aimed at the modern business. It reduces administration, eliminates duplication of effort, and reduces the work involved in selecting new talent. Interview Coordinator also helps make better hiring decisions using integrated video interview technology; by using video to connect with and evaluate candidates, hiring managers and other campaign contributors save time, greatly reduce travel overheads, and eliminate scheduling limitations. A third example is Net-Interview, offered by Advantage Hiring. It is an electronic screening tool that becomes part of an electronic job posting: candidates’ answers are automatically stored in a database, scored and ranked, and compared against the requirements the hiring manager has established. A limitation of present hiring systems is that they still require human intervention, as they provide only a minimal upgrade to the standard hiring process. They generate fixed questions or fields, thus restricting the extraction of essential information from the job applicant beyond what is pre-determined (ITWorld, 2011).
The researchers utilized the concepts of chatbots and dialog systems, applying different methods and approaches to effectively control the interview and elicit answers from the job applicant, while simulating an intelligent conversation with a human.
INTERBOT
Interbot, the resume-based employment hiring dialog system, was evaluated in terms of its accuracy in comparing and scoring the similarity of the user’s resume details to the answers derived from the conducted interview. It was also evaluated in terms of naturalness, task success, and system usability. Five human resources experts used the system and evaluated a dialog it produced; each was given questionnaires covering the accuracy of the comparing and scoring, the dialog naturalness, the task success, and the system usability.
The example-based dialog model is the basis of the Interbot resume-based employment hiring dialog system. It was introduced by Lee et al. in their study “Example-based dialog modeling for practical multi-domain dialog system” as a way of deploying data-driven dialog systems. Its main idea is that the dialog manager (DM) uses dialog examples that are semantically indexed in a database, instead of domain-specific rules or probabilistic models, for dialog management. The methodology of Lee et al. presents a generic dialog-modeling framework for managing multi-domain goal-oriented dialogs and chat dialogs within the same framework. The group decided to follow the example-based dialog model in Interbot because of its ability to handle and manage different dialog topics or domains, especially goal-oriented dialogs and dialogs with slot-filling tasks. In Interbot, the dialog examples are indexed in the database according to the dialog state and other example-based elements. The following are the components of the example-based dialog model used in the study:

a) Elements
The elements used in the example-based dialog model followed by the Interbot dialog system are stated and explained in Table 3-1. Some of these elements were derived from the original example-based dialog model, and some were modified to fit the task of the Interbot dialog system; the dialog-state bookkeeping they imply is sketched in code after the table.
Table 3-1. Example-based dialog model elements

| Element | Explanation | Example/s |
| --- | --- | --- |
| Domain | The dialog topic or genre; it can also pertain to a certain slot-filling task. | School, Work |
| Dialog History Vector (DHV) | A vector of 0s and 1s representing the slots/sub-topics of a domain/topic. All slots are initially marked 0 and are changed to 1 when the slot/sub-topic is finished. It represents the dialog state. | [0,0,0,0,0], [1,0,1,0,1], [1,1,1,1,1] |
| Vague or Specific (VS) | Classifies whether the initial dialog state of all 0s (e.g., [0,0,0,0]) pertains to the domain/whole topic (General/Vague) or to the first slot/sub-topic (Specific). | V, S |
| Slot | The slot/sub-topic to be discussed/processed. When VS is V, Slot is blank/null. | Level, School Name, "" |
| Example | The question template sentence. | What is your School? |
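To make the dialog-state bookkeeping concrete, the following is a minimal Python sketch of how these elements could be represented. The class, method, and field names are our own illustration, not the authors’ actual implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DialogState:
    """One dialog state in the example-based model: a domain, its
    dialog history vector (DHV), and the vague/specific flag (VS)."""
    domain: str                    # e.g. "HLQ" or "Latest Work Details"
    slots: List[str]               # ordered sub-topics of the domain
    dhv: List[int] = field(default_factory=list)  # 0 = open, 1 = finished
    vs: str = "V"                  # "V" while the opening question is general

    def __post_init__(self):
        if not self.dhv:
            self.dhv = [0] * len(self.slots)  # all slots start unfinished

    def finish_slot(self, slot: str) -> None:
        """Mark a sub-topic as done and switch to specific questioning."""
        self.dhv[self.slots.index(slot)] = 1
        self.vs = "S"

    def done(self) -> bool:
        return all(self.dhv)  # domain ends when every slot is 1

# Example: the HLQ domain from Table 3-2
state = DialogState("HLQ", ["Level", "Course", "School", "Grad Year/Month"])
state.finish_slot("Level")
print(state.dhv)  # [1, 0, 0, 0]
```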

b) Example Database

The database is pre-structured so that there are four or more example questions/templates for each combination of Domain, DHV, and VS. These variants represent the different ways a question can be asked in the real world. This strategy also helps in other cases, such as when the applicant cannot understand a question, because the alternative example questions can already serve as the clarification/rephrased question that helps the applicant understand the system’s intended meaning. A sketch of such a structure follows.
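Here is a minimal sketch of how such a database could be keyed, assuming an in-memory dictionary as a stand-in for the actual Examples Database; the templates shown are illustrative only (two are taken from the sample transcript in Figure 3-4).

```python
# Hypothetical in-memory stand-in for the Examples Database:
# each (Domain, DHV, VS) key maps to 4+ interchangeable question templates.
EXAMPLES = {
    ("HLQ", (0, 0, 0, 0), "V"): [
        "Let me know the details about your highest level of schooling.",
        "Tell me about your highest educational attainment.",
        "What is your highest level of qualification?",
        "Can you describe your highest level of education?",
    ],
    ("HLQ", (1, 0, 0, 0), "S"): [
        "What course of education did you take up in your academic degree?",
        "What was your field of study?",
        "What degree program did you complete?",
        "What course did you take up?",
    ],
    # ... one entry per reachable dialog state
}
```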
c) Processes

There are two processes from the original example-based dialog model that the group implemented in the Interbot dialog system: example search and example selection. The example search method searches for example questions/templates given the current dialog state, while example selection selects one question to use from the searched example questions. Both were modified to fit the task of the Interbot dialog system and are further explained in the Description of Modules & Interfaces.

d) The Interbot Example-Based Dialog Model
The Interbot example-based dialog model was designed to ask for the significant resume details that are normally the topics of discussion in an initial interview. The initial interview discusses five main topics/domains consecutively, each with its own sub-topics or slots; all the domains and slots are shown in Table 3-2. The dialog manager, and the whole process, ends when all sub-topics/slots have been asked, answered, and compared to the resume.

Table 3-2. Domains and slots

| Domain/Main Topic | Kind of Task | Slots/Sub-topics Involved |
| --- | --- | --- |
| Highest Level of Educational Attainment/Qualification (HLQ) | Slot-filling | Level, Course, School, Graduation Year and Month |
| Second Highest Level of Educational Attainment/Qualification (SLQ) | Slot-filling | Level, School, Graduation Year and Month |
| School Experience | Non-slot-filling | Projects, Accomplishments |
| Latest Work Details | Slot-filling | Company Name, Job Title, Years Worked for the Company |
| Latest Work Experience | Non-slot-filling | Work Description, Projects and Accomplishments, Skills Acquired, Reason for Leaving the Company |

Pre-processing

Figure 3-1. Pre-processing
The Interbot example-based dialog system starts by getting the applicant’s resume details. The applicant is asked to enter his or her resume details in an electronic form before proceeding to the initial chat interview. The resume details are then stored in the resume database, to be accessed during the initial interview as the basis for comparison.
Example Search and Example Selection
Figure 3-2. Example Search and Example Selection

a) Example Search Method
The example search method searches the Examples Database for example questions/templates given the current dialog state (Domain, DHV, and VS). The results are placed in a vector list. Invoking this method signifies moving on to the next question or sub-topic.

b) Example Selection Method
The example selection method randomly chooses a single question from the vector list of searched example questions. The chosen question is then deleted from the vector list to minimize repetition, and it is printed to the dialog system interface. This method is executed right after an example search to set the question to be asked; when it is executed on its own, or for the second/third/fourth time, it signifies a clarification of the previous question. A combined sketch of both methods follows.
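The two methods can be sketched in Python as follows. The random selection and deletion-on-use mirror the behavior described above; the function names and the tiny demo database are our own assumptions, and the authors’ actual implementation may differ.

```python
import random
from typing import Dict, List, Tuple

def example_search(examples: Dict, domain: str,
                   dhv: Tuple[int, ...], vs: str) -> List[str]:
    """Return a fresh vector list of all templates indexed under the
    current dialog state (Domain, DHV, VS)."""
    return list(examples.get((domain, dhv, vs), []))

def example_select(candidates: List[str]) -> str:
    """Randomly pick one template and remove it from the list, so a
    later call on the same list yields a differently worded
    (clarification/rephrased) question."""
    question = random.choice(candidates)
    candidates.remove(question)
    return question

# Tiny stand-in for the Examples Database sketched earlier.
EXAMPLES = {("HLQ", (0, 0, 0, 0), "V"): [
    "Let me know the details about your highest level of schooling.",
    "Tell me about your highest educational attainment.",
    "What is your highest level of qualification?",
    "Can you describe your highest level of education?",
]}

# Usage: search once per dialog state, then select repeatedly if
# clarification or rephrasing is needed.
pool = example_search(EXAMPLES, "HLQ", (0, 0, 0, 0), "V")
print(example_select(pool))   # first phrasing of the question
print(example_select(pool))   # rephrased version, if the user was confused
```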
The Evaluation of the User’s Answer

Figure 3-3. Evaluation of the User’s Answer
The evaluation of the user’s answers focuses on comparing the user’s chat answers to the user’s resume details, but the evaluation process depends on the kind of task of the current domain. As explained in the Presentation of Solutions, there are two kinds of tasks, slot-filling and non-slot-filling, and they undergo different processes to evaluate the user’s answer. In a slot-filling task, the user’s answer is first processed with a Named Entity Recognizer to extract details, which are then directly compared with the corresponding resume details. A non-slot-filling task, on the other hand, immediately compares the user’s answer to the resume detail using sentence similarity. These tasks, along with other scenarios and cases, are further explained below. The evaluation results are then passed to the next module, the scoring of the user’s answer.

a) Slot-filling Task
The Highest Level of Qualification (HLQ), Second Highest Level of Qualification (SLQ), and Latest Work Details domains involve a slot-filling task, as shown in Table 3-2. A slot-filling task accomplishes its slots in any order depending on the situation, and one or more slots can be accomplished at a time. Its process differs from that of the non-slot-filling task: it uses a Named Entity Recognizer to identify terms, specifically proper nouns such as school level, course, or school name, to be compared with the corresponding resume fields. The Named Entity Recognizer from Stanford was trained to recognize nine tags, namely Person, Level, Course, School, Month, Year, Company Name, Job Title, and Years. Since most recognized entities are proper nouns or named words, each is compared with its corresponding resume detail through exact or direct matching. Some recognized entities, however, such as educational level, are ambiguous: different words may carry the same meaning, as with “Bachelor’s Degree” and “College Degree”, which both pertain to one thing. For these cases, the entity is compared to its corresponding resume detail through sentence similarity. The sentence similarity tool measures the semantic similarity of two sentences or groups of words, taking the meaning of the words into consideration by referencing ontologies in WordNet. The tool outputs a similarity percentage; if the compared words/sentences score more than 60% similarity, they are considered equal. When no entities are recognized in the user’s answer, or a recognized entity mismatches its corresponding resume detail, the dialog manager performs the strategies explained in Other Cases (c), specifically in (c.3) and (c.4). A sketch of this comparison step follows.
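As a rough illustration of the comparison step, here is a Python sketch. The entity dictionary stands in for the output of the trained Stanford NER model, and the placeholder sentence_similarity is replaced by the WordNet-based sketch in the next subsection; all names are illustrative assumptions, not the authors’ code.

```python
def sentence_similarity(a: str, b: str) -> float:
    """Placeholder; see the WordNet-based sketch in the next subsection."""
    return 1.0 if a.lower() == b.lower() else 0.0

def compare_slots(entities: dict, resume: dict,
                  ambiguous_tags=("Level",)) -> dict:
    """Compare each recognized entity against its resume field.
    Proper-noun tags use exact matching; ambiguous tags (e.g. "Bachelor's
    Degree" vs "College Degree") fall back to semantic similarity with
    the 60% threshold described above."""
    results = {}
    for tag, answer in entities.items():
        expected = resume.get(tag)
        if expected is None:
            continue
        if tag in ambiguous_tags:
            results[tag] = sentence_similarity(answer, expected) > 0.60
        else:
            results[tag] = answer.strip().lower() == expected.strip().lower()
    return results

# Hypothetical NER output for "I obtained my masters degree in education."
entities = {"Level": "masters degree", "Course": "education"}
resume = {"Level": "Masters Degree", "Course": "Education"}
print(compare_slots(entities, resume))  # {'Level': True, 'Course': True}
```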
b) Non-slot-filling Task

The School Experience and Latest Work Experience domains involve a non-slot-filling task, as shown in Table 3-2. A non-slot-filling task accomplishes its slots in a fixed, programmed order, one at a time. Since the slots/sub-topics of these domains, such as work description or school projects, require answers in descriptive and complex sentences, the user’s answer is directly compared with the corresponding resume detail, which is also in phrase/sentence form, using sentence similarity. As above, the sentence similarity tool measures the semantic similarity of two sentences or groups of words by referencing ontologies in WordNet, and compared phrases/sentences scoring more than 60% similarity are considered equal. When the user’s answer does not match its corresponding resume detail, the dialog manager performs the strategy explained in Other Cases (c), specifically in (c.5). A sketch of one common WordNet-based similarity computation follows.
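The paper does not give the exact similarity formula, so the following is a minimal sketch of one common WordNet-based approach (averaging each word’s best synset-to-synset score in both directions) using NLTK; the actual tool used by the authors may compute the percentage differently.

```python
# A common WordNet-based sentence similarity: for each word of one
# sentence, take its best Wu-Palmer score against any word of the other
# sentence, then average both directions. Requires: pip install nltk,
# plus nltk.download("wordnet") once beforehand.
from nltk.corpus import wordnet as wn

def _best_score(word: str, others: list) -> float:
    """Best Wu-Palmer similarity between any synset of `word` and any
    synset of any word in `others`."""
    best = 0.0
    for s1 in wn.synsets(word):
        for other in others:
            for s2 in wn.synsets(other):
                sim = s1.wup_similarity(s2)
                if sim is not None and sim > best:
                    best = sim
    return best

def sentence_similarity(a: str, b: str) -> float:
    """Symmetric score in [0, 1]; > 0.60 counts as 'equal' per the text."""
    wa, wb = a.lower().split(), b.lower().split()
    if not wa or not wb:
        return 0.0
    ab = sum(_best_score(w, wb) for w in wa) / len(wa)
    ba = sum(_best_score(w, wa) for w in wb) / len(wb)
    return (ab + ba) / 2

print(sentence_similarity("bachelor degree", "college degree") > 0.60)
```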
c) Other Cases

The dialog manager handles the situations explained in this part. It maintains an error trial count for cases that may occur during the dialog; the count is set to 1 at the initialization of each slot to be asked/discussed, and it is incremented whenever one of these cases occurs for that slot. When the error trial count reaches 3, the dialog manager moves on to the next slot, to keep the dialog transitioning and avoid dwelling on a single slot/task.
c.1) User asks a clarification question. The dialog manager considers situations where the user did not understand the question asked by the system and asks a clarification question such as “What is highest level of qualification?” In such situations, the dialog manager simply selects another question from the list of searched example questions (example selection) and asks it. The dialog manager treats a user’s answer as a question if it ends in a question mark (?), and determines whether it is a clarification question by comparing it to the question the system previously asked.

c.2) User asks an off-topic question. When the user/applicant asks an off-topic question such as “Why are you asking me this question?”, the dialog manager also selects another question from the list of searched example questions (example selection), but an additional clarification sentence is added before it ([Additional Clarification Sentence] + [Rephrased Question]). The response thrown back to the user reads, for example, “I’m sorry but I cannot answer your question. Let’s please just focus on the topic. Again, tell me about your highest level of attainment.”

c.3) No understood/recognized entity in the user’s answer (for slot-filling tasks). The system will fail to recognize any entity in the user’s answer when the sentence contains errors in grammar, capitalization, or spelling, or when the named entity recognizer itself fails. In these cases, the dialog manager compares all the resume details of the current topic against the user’s answer: each resume detail is searched for within the answer. If one or more resume details are found, each is set as a recognized entity, and the slot-filling task method (2.a) is executed again without running the named entity recognizer, so that the newly acquired recognized entities can be compared to the resume. If no resume details are found in the answer, the dialog manager rephrases the previous question by selecting another question from the list of searched example questions (example selection), with an additional clarification sentence placed before it ([Additional Clarification Sentence] + [Rephrased Question]), for example: “Can you please rephrase your answer? Again, tell me when you obtained your Bachelor’s degree.”

c.4) Recognized entity does not match its corresponding resume detail (for slot-filling tasks). Slots in slot-filling tasks are accomplished in no particular order, and one or more may be accomplished at a time. When a recognized entity does not match its corresponding resume detail, the dialog manager adds the slot it pertains to into a Trial List. The system keeps throwing clarification questions while the Trial List is not empty; when the list contains one or more slots, the current clarification question is always about the first slot in the list.
If no slot clarification happened before the current turn of conversation, or the current slot to clarify differs from the previously clarified slot, the new slot is set as the current slot to clarify, and an example search is performed using the domain, the slot to clarify, and VS as the search queries. Otherwise, if the error trial count has reached its limit, the mismatched entity is set as the final chat answer for the slot to clarify, and that slot is removed from the Trial List. If the error trial count has not yet reached its limit, the dialog manager selects a rephrased question via the example selection method. When the Trial List is empty, has just become empty, or a new slot must be clarified, the normal example search method is performed using the current Domain, DHV, and VS.

c.5) User answer does not match its corresponding resume detail (for non-slot-filling tasks). Since the slots in a non-slot-filling task are accomplished one at a time, the comparing and clarifying of each slot is also done one at a time. If the user’s answer does not match its corresponding resume detail, a rephrased question is thrown via example selection. Once the error trial count reaches its limit, the latest user answer is set as the final chat answer for the current state/slot, and the dialog moves on to the next slot/task. A condensed sketch of this trial-count bookkeeping follows.
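The following condensed Python sketch captures the trial-count bookkeeping described above. The limit of 3 and the Trial List behavior follow the text; the class and method names, and the simplification of the search/selection interplay, are our own assumptions.

```python
from collections import deque

ERROR_TRIAL_LIMIT = 3

class ClarificationTracker:
    """Tracks per-slot error trials and the Trial List of mismatched
    slots awaiting clarification (cases c.3-c.5)."""
    def __init__(self):
        self.trials = {}            # slot -> error trial count
        self.trial_list = deque()   # slots with entity/resume mismatches

    def start_slot(self, slot: str) -> None:
        self.trials[slot] = 1       # count initialized to 1 per the text

    def record_mismatch(self, slot: str) -> bool:
        """Register a failed trial; return True when the dialog manager
        should accept the last answer as final and move on (limit 3)."""
        if slot not in self.trial_list:
            self.trial_list.append(slot)
        self.trials[slot] = self.trials.get(slot, 1) + 1
        if self.trials[slot] >= ERROR_TRIAL_LIMIT:
            self.trial_list.remove(slot)  # accept mismatched answer as final
            return True
        return False

    def next_to_clarify(self):
        """First slot in the Trial List, or None if the list is empty."""
        return self.trial_list[0] if self.trial_list else None

# Usage: two failed trials on "School" exhaust the limit and move on.
tracker = ClarificationTracker()
tracker.start_slot("School")
print(tracker.record_mismatch("School"))  # False: ask a rephrased question
print(tracker.record_mismatch("School"))  # True: move to the next slot
```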
Scoring and Transcript Output
The system scores the performance of the applicant per field/slot, per domain, and with a total score for the whole interview. Scoring depends on the state of the conversation. When the system is asking general questions, the deduction for the domain increases with every failed trial, that is, whenever the applicant’s answer does not supply any information about any field/slot of that domain: the deduction is increased by 10 if the user’s answer is irrelevant to the question, and by 5 if it is relevant.
For the scoring, the researchers grouped the cases according to their relevancy, as shown in Table 3-3.
Table 3-3. Relevancy of cases

| Kind of Domain | Case | Relevancy |
| --- | --- | --- |
| Slot-filling tasks | User asks a clarification question (c.1) | Relevant |
| | User asks an off-topic question (c.2) | Irrelevant |
| | No understood/recognized entity in the user’s answer (c.3) | Irrelevant |
| | Recognized entity does not match its corresponding resume detail (c.4) | Irrelevant |
| Non-slot-filling tasks | User asks a clarification question (c.1) | Relevant |
| | User asks an off-topic question (c.2) | Irrelevant |
| | User answer does not match its corresponding resume detail (c.5) | Irrelevant |

Meanwhile, Table 3-4 shows where the deductions are applied depending on the current state of the conversation (whether the system’s question is general or specific) and how they are computed. When the system is asking specifically about a slot/field of a certain domain, the deduction for that slot/field likewise increases with every failed trial, that is, when the user answers incorrectly or irrelevantly, or asks a clarification question related to the question. Each field/slot has a default score of 100; it is deducted by 10 if the user’s answer is irrelevant or incorrect, and by 5 if the user replies with a clarification question.

Table 3-4. Deductions by current state of the conversation/system question

| State | Deduction Applied To | Formula | Relevant | Irrelevant |
| --- | --- | --- | --- | --- |
| General/Vague (slot-filling tasks) | Domain/Topic Score | (sum of the scores of all its slots/tasks − deduction) / number of its slots/tasks | 5 | 10 |
| Specific (slot-filling/non-slot-filling tasks) | Slot/Task Score | slot/task score − deduction | 5 | 10 |

When a specific slot/field is finished, the total deduction for that slot/field is subtracted from its default score. When a domain is finished, the system sums the net scores of all fields of that domain, subtracts the domain-level deductions, and takes the average for that domain. After the interview, the system averages the domain scores to obtain the score for the whole interview. A sketch of the scoring computation follows.
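Put together, the scoring can be sketched as follows. This is a simplified reading of Table 3-4: the default score of 100 and the deduction amounts of 5 and 10 come from the text, while the function names and the boolean encoding of relevancy are our own.

```python
DEFAULT_SCORE = 100
DEDUCT_RELEVANT, DEDUCT_IRRELEVANT = 5, 10

def slot_score(failed_trials: list) -> int:
    """Net score of one slot/field: 100 minus 5 per relevant failed trial
    (e.g. a clarification question) and 10 per irrelevant one."""
    deduction = sum(DEDUCT_RELEVANT if relevant else DEDUCT_IRRELEVANT
                    for relevant in failed_trials)
    return DEFAULT_SCORE - deduction

def domain_score(slot_scores: list, domain_deduction: int) -> float:
    """(Sum of all slot scores - domain-level deduction) / number of slots,
    per the General/Vague row of Table 3-4."""
    return (sum(slot_scores) - domain_deduction) / len(slot_scores)

def interview_score(domain_scores: list) -> float:
    """Average of the domain scores = score for the whole interview."""
    return sum(domain_scores) / len(domain_scores)

# Example: a slot with one relevant and one irrelevant failed trial,
# inside a domain that also drew a general-question deduction of 10.
s = slot_score([True, False])          # 100 - 5 - 10 = 85
d = domain_score([s, 100, 100], 10)    # (285 - 10) / 3 = 91.67
print(s, round(d, 2), round(interview_score([d, 95.0]), 2))
```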
An example of the transcript output of the system, i.e., the generated conversation, is shown in Figure 3-4.

Figure 3-4. Sample Transcript Output
Applicant Name: Rosales,NAOMI
Interview Date: 2013-9-24
Interview Time: 4:57:1

RHENZ: Hi NAOMI! I am Rhenz your interviewer for today! :) For this initial interview, I will be asking you a couple of questions about your educational background and work experience. So, let's begin! First, Let me know the details about your highest level of schooling.
NAOMI: I obtained my masters degree in education.
RHENZ: What course of education did you take up in your academic degree?
NAOMI: I took up Guidance.
RHENZ: Where did you acquire your highest educational attainment ?
NAOMI: I graduated from University of the Philippines.
RHENZ: When did you receive your diploma for your highest educational attainment?
NAOMI: I received my diploma on April, 2005
RHENZ: Tell me about your second highest educational attainment.
NAOMI: I graduated bachelor's degree in education
RHENZ: What university did you finish your second highest level of qualification?
NAOMI: I graduated from Philippine Normal University.
RHENZ: When did you graduate?
NAOMI: I had my diploma on March, 1997
RHENZ: Tell me the most significant project in your academic life.
NAOMI: I had a research on person's with disabilities
RHENZ: What has been your greatest achievement in your educational life?
NAOMI: I graduated Cum laude
RHENZ: Tell me details about your most recent job listed in your resume.
NAOMI: I am doing counseling and research
RHENZ: Can you please rephrase that? Again, Tell me specific information with regards to your recent employment details, such as your job title, company that you have worked for, etc.
NAOMI: I am a Guidance Counselor at the University of Santo Tomas. I am doing counseling and I spearhead seminars
RHENZ: How many years have you been with University of Santo Tomas?
NAOMI: I am currently on my 4th year.
RHENZ: Can you restate your answer? Again, Tell me the number of years you were employed in University of Santo Tomas.
NAOMI: I am currently on my 3rd year
RHENZ: What were your job tasks in University of Santo Tomas?
NAOMI: I do counseling and I spearhead seminars
RHENZ: Describe the kind of work you do in University of Santo Tomas.
NAOMI: I am doing counseling and career placement
RHENZ: Please share a specific project that you consider a great accomplishment in your recent work life.
NAOMI: Research, spearheads seminars
RHENZ: What capabilities have you learned in your job?
NAOMI: Competence
RHENZ: Tell me about cause for leaving that company or what made you apply to this company.
NAOMI: I needed a job
RHENZ: Thank you, we have reached the end of the interview. You will be informed as soon as the results are finalized. Have a good day.


ACKNOWLEDGMENTS
First of all, the proponents would like to thank God Almighty for all the graces granted in completing this study. Second, their very supportive thesis adviser, Ms. Ria Sagum, for providing much-needed guidance and support. Third, their families, for their unending support during the study. Fourth, the Human Resources practitioner, Ms. Florence Villones, who showed her support and contributed information on how to conduct an initial interview. Fifth, the panelists, Ms. Donata Acula, Mr. Cecil Delfinado, and Ms. Charmaine Ponay, for taking the time to support the proponents in their study and for their helpful opinions and recommendations. Lastly, all their friends, colleagues, and everyone who has been a part of the researchers’ lives.
CONCLUSION
The study solved the problems it set out to address. The results in Chapter 4 show an accuracy of 0.95 (95%) in comparing and scoring the applicants’ answers. On naturalness, the results show that the respondents were neutral about the dialog; the results also show that they agreed on the system’s task success and, lastly, on the system’s usability. The group compared the resume-based employment hiring dialog system using the enhanced example-based dialog model against a hierarchical reinforcement learning dialog system, chosen as the threshold because of the similarities of having dialog states, template-based questions/responses, and being goal-oriented. The reinforcement learning dialog system garnered an F-measure of 0.75 for real vs. simulated coherent responses, while the group’s study garnered an F-measure of 0.95 for HR vs. system-generated scores. This implies that the study achieves fair accuracy when compared to other dialog models.
REFERENCES

Pirrò, Giuseppe, and Jérôme Euzenat. "A Feature and Information Theoretic Framework for Semantic Similarity and Relatedness." Proceedings of the 9th International Semantic Web Conference (ISWC 2010), LNCS 6496, Springer, 2010, pp. 615-630.
"The Stanford NLP (Natural Language Processing) Group." N.p., n.d. Web. 24 Sept. 2013. <http://nlp.stanford.edu/software/CRF-NER.shtml>.
Lee, Cheongjae, Sangkeun Jung, Seokhwan Kim, and Gary Lee. "Example-based Dialog Modeling for Practical Multi-domain Dialog System." ScienceDirect. N.p., n.d. Web. <http://isoft.postech.ac.kr/publication/ijournal/specom09_lee.pdf>.
Allen, J., Byron, D., Dzikovska, M., Ferguson, G., Galescu, L., and Stent, A. "An Architecture for a Generic Dialogue Shell." Natural Language Engineering 6.3 (2000): 1-16.
Aleven, Vincent, Octav Popescu, and Kenneth Koedinger. "Pedagogical Content Knowledge in a Tutorial Dialogue System to Support Self-Explanation." N.p., n.d. Web. <http://pact.cs.cmu.edu/koedinger/pubs/Aleven%20Popescu%20Koedinger%20aied01.pdf>.
Singh, Satinder, Michael Kearns, Diane Litman, and Marilyn Walker. "Reinforcement Learning for Spoken Dialogue Systems." N.p., n.d. Web. <http://www.cis.upenn.edu/~mkearns/papers/rlds.ps>.
Ekeklint, Susanne, and Fredrik Kronlid. "The Need for Robustness in Dialog Systems." N.p., n.d. Web. <http://www.speech.kth.se/~rolf/gslt_papers/FredikSusanne.pdf>.
"Chp 1: Expert Systems and Artificial Intelligence." 2003. Web. 1 Oct. 2013. <http://www.wtec.org/loyola/kb/c1_s1.htm>.
Trung, H. "Multimodal Dialogue Management - State of the Art." 2006. Web. <https://trac.v2.nl/export/5433/andres/Documentation/Not%20classified/multimodal%20dialogue%20management%20-%20state%20of%20the%20art.pdf>.
Cenek, Pavel. "Hybrid Dialogue Management in Frame-Based Dialogue Systems Exploiting VoiceXML." Fakulta Informatiky MU, Botanická 68 (2004).
Fodor, Paul. "Dialog Management for Decision Processes." Proceedings of the 3rd Language and Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics (2007): 1-4.
Pietquin, Olivier. A Framework for Unsupervised Learning of Dialogue Strategies. www.i6doc.com, 2004.
Hardy, Hilda, et al. "Data-Driven Strategies for an Automated Dialogue System." Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (2004): 71.
Bermúdez, Meritxell González, and Marta Gatius Vila. DIGUI: A Flexible Dialogue System for Guiding the User Interaction to Access Web Services. Universitat Politècnica de Catalunya, 2011.
"Spoken Dialogue Systems - Speech, Music and Hearing." 2010. Web. 1 Oct. 2013. <http://www.speech.kth.se/~gabriel/thesis/chapter2.pdf>.
Krishnan, V. "Named Entity Recognition - CS 229." 2005. Web. <http://cs229.stanford.edu/proj2005/KrishnanGanapathy-NamedEntityRecognition.pdf>.
"Natural Language Generation - MIT Encyclopedia of Cognitive ..." 2003. Web. 1 Oct. 2013. <http://ai.ato.ms/MITECS/Entry/hovy2.html>.
Abu Shawar, Bayan, and Eric Atwell. "Machine Learning from Dialogue Corpora to Generate Chatbots." Expert Update 6.3 (2003): 25-29.
"CiteSeerX — Background: The Learning Chatbot." 2009. Web. 1 Oct. 2013. <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.92.480>.
Graesser, Arthur C., et al. "AutoTutor: An Intelligent Tutoring System with Mixed-Initiative Dialogue." IEEE Transactions on Education 48.4 (2005): 612-618.
Hubal, R.C. "AVATALK Virtual Humans for Training with Computer Generated ..." 2000. Web. <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.111.9724>.
"A Mixed-Initiative Natural Dialogue System for Conference Room ..." 2012. Web. 1 Oct. 2013. <http://www.researchgate.net/publication/221488696_A_mixedinitiative_natural_dialogue_system_for_conference_room_reservation>.
Field, D. "The Senior Companion: A Semantic Web Dialogue ..." AAMAS 2009. Web. <http://www.ifaamas.org/Proceedings/aamas09/pdf/06_Demos/d_07.pdf>.
"Video Interviewing: The Next Step in the Hiring Process | Online ..." 2012. Web. 1 Oct. 2013. <http://www.webrecruit.co.uk/blog/job-interview-tips-advice/video-interviewing-sonru/>.
"VidCruiter | CrunchBase Profile." 2009. Web. 1 Oct. 2013. <http://www.crunchbase.com/company/vidcruiter>.
