Premium Essay


In: Other Topics

Submitted By yardenc1994
Words 316
Pages 2
Data Verification - Effectiveness of Testing

• Prepared a survey to ask 20 students (10 girls, 10 boys)

• Each person was allowed to tick one option in each question

• The questions of the survey were:

1. About how many tests do you have per week?
   - Less than 2 (7/7)
   - More than 2 (3/3)
   - More than 3 (0/0)
2. Do you get good grades in most tests?
   - Yes (5/7)
   - No (1/0)
   - So so (4/3)
3. Do you find the tests in our school helpful?
   - Yes (5/3)
   - Not at all (0/0)
   - Some are helpful, some are not (5/7)
4. Which subjects are good to be tested on?
   - Math (5/5)
   - Languages (2/3)
   - Sciences (3/2)
5. Do you think tests like the IGCSE exams will help you in the future?
   - Yes, I think they will prepare me for university (9/9)
   - I think some are not useful, but some are (1/1)
   - No, I think they are not useful at all (0/0)

• Our results (boys/girls) are shown next to each option above.


- Boys and girls have the same number of tests; most have fewer than 2 per week (not including quizzes).
- Most girls get good grades; more boys than girls said 'so so'.
- Half of the boys said tests are good and half said some are helpful and some aren't. More girls said 'some are and some aren't' than 'yes'.
- Half of the girls and half of the boys said math is the most important subject to be tested on. More boys preferred sciences over languages, and more girls preferred languages over sciences.
- Almost all girls and boys agreed that international testing is important for university.
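The tallies above can be summarized with a short script. This is a minimal sketch, not part of the original coursework: the counts come from the survey, while the `SURVEY` dictionary and function names are illustrative.

```python
# Each option maps to (boys, girls) counts, out of 10 respondents per group.
SURVEY = {
    "less than 2 tests/week": (7, 7),
    "more than 2 tests/week": (3, 3),
    "grades: yes": (5, 7),
    "grades: no": (1, 0),
    "grades: so so": (4, 3),
}

def percent(count, group_size=10):
    """Turn a raw count into a percentage of its group."""
    return 100.0 * count / group_size

def summarize(survey):
    """Return {option: (boys %, girls %)} for every option."""
    return {opt: (percent(b), percent(g)) for opt, (b, g) in survey.items()}

summary = summarize(SURVEY)
print(summary["grades: yes"])  # (50.0, 70.0)
```

The same pattern extends to all five questions; percentages make the boys/girls comparison direct because both groups happen to be the same size.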

- In general, there is agreement that testing is good preparation for...

Similar Documents

Premium Essay


...Discuss the importance of data accuracy. Inaccurate data leads to inaccurate information. What can be some of the consequences of data inaccuracy? What can be done to ensure data accuracy? Data accuracy is important because inaccurate data may lead to the closing down of a business, the loss of jobs, or the failure of a new product. To ensure that one's data is accurate, one may double-check the data given to them, as well as have more than one person research the data in question. Project 3C and 3D Mastering Excel: Project 3G CGS2100L - Section 856 MAN3065 - Section 846 | | 1. (Introductory) Do you think Taco Bell was treated fairly by the mass media when the allegations were made about the meat filling in its tacos? I think so, because they are serving the people, and if you are serving the people then it is the people's right to know exactly what you are serving them. 2. (Advanced) Do you think the law firm would have dropped its suit against Taco Bell if there were real merits to the case? It's hard to say, but I do think that real merits would have changed the playing field; with real merits, who's to say Taco Bell wouldn't have had the upper hand in the case? 3. (Advanced) Do you think many people who saw television and newspaper coverage about Taco Bell's meat filling being questionable will see the news about the lawsuit being withdrawn? I doubt......

Words: 857 - Pages: 4

Free Essay


...Import Data from CSV into Framework Manager
1. Save all your tables or .csv files under one folder. In our case we will use the Test folder saved on Blackboard, with three .csv files named TestData_Agent.csv, TestData_Customer.csv, TestData_InsuranceCompany.csv.
2. Now, locate the correct ODBC exe at "C:\Windows\SysWOW64\odbcad32.exe".
3. Once the ODBC Data Source Administrator is open, go to the "System DSN" tab and click "Add".
4. Select "Microsoft Text Driver (*.txt, *.csv)" if you want to import from .csv files.
5. Uncheck "Use Current Directory", then click Select Directory to define the path of your data source. Give the data source a name as well; let's use TestData in this case. NOTE: All the files under the specified location will be selected by default.
6. Press OK again and close the dialogue. Now we will import these .csv files into Cognos using Framework Manager.
7. Find Framework Manager at C:\Program Files (x86)\ibm\Cognos Express Clients\Framework Manager\IBM Cognos Express Framework Manager\bin.
8. Right-click 'FM.exe', then select 'Properties'. Click the 'Compatibility' tab and check 'Run this program as an administrator' under 'Privilege Level'.
9. Open Framework Manager and create a new project and give it any name, in this case CSV_MiniProject. Then click OK.
10. Enter the username "Administrator" and password "win7user".
11. Select Language as English and hit OK.
12. Select Data......
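Before running the ODBC import described above, the .csv files can be sanity-checked with Python's standard csv module. This sketch is not part of the original instructions: a real run would open a file such as TestData_Agent.csv, while here a small in-memory sample stands in for the file, and its column names are made up.

```python
import csv
import io

def read_rows(text):
    """Parse CSV text into a list of dictionaries keyed by the header row."""
    return list(csv.DictReader(io.StringIO(text)))

# In-memory stand-in for a file such as TestData_Agent.csv
# (the columns here are hypothetical).
sample = "agent_id,name\n1,Ann\n2,Bob\n"

rows = read_rows(sample)
print(len(rows), rows[0]["name"])  # 2 Ann
```

Checking row counts and headers this way catches malformed files before they reach the ODBC driver, where errors are harder to diagnose.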

Words: 775 - Pages: 4

Free Essay


...instructed to backfill with temporary labour. The collated data is being used to investigate the effect of this shift in labour pattern, paying particular attention to staff retention. The table below gives a month-by-month record of how many staff were employed, temporary and permanent, and how many temporary staff left at the end of each month compared to how many left that were on a permanent contract.

Month  | Temporary staff | Permanent staff | Total | Permanent leavers | Temporary leavers | Total leavers
Jan-15 | 166 | 359 | 525 | 7  | 2  | 9
Feb-15 | 181 | 344 | 525 | 15 | 5  | 20
Mar-15 | 181 | 344 | 525 | 0  | 7  | 7
Apr-15 | 204 | 321 | 525 | 23 | 7  | 30
May-15 | 235 | 290 | 525 | 31 | 12 | 43
Jun-15 | 238 | 287 | 525 | 3  | 17 | 20
Jul-15 | 250 | 275 | 525 | 12 | 42 | 54
Aug-15 | 267 | 258 | 525 | 17 | 23 | 40
Sep-15 | 277 | 248 | 525 | 10 | 27 | 37
Oct-15 | 286 | 239 | 525 | 9  | 30 | 39
Nov-15 | 288 | 237 | 525 | 2  | 34 | 36
Dec-15 | 304 | 221 | 525 | 16 | 45 | 61
Jan-16 | 305 | 220 | 525 | 1  | 53 | 54
Feb-16 | 308 | 217 | 525 | 3  | 57 | 60

An explanation of how I analysed and interpreted the data: to make a comparison between the labour pattern and retention, I placed the above data into a line graph, which gives a better idea of trends over the period. My findings: the actual level of staff has remained constant throughout the data collated, as each job requires a specific amount......
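The retention pattern in the table can also be expressed as a monthly leaver rate. The sketch below is not part of the original report; it uses a few rows copied from the table, and the function name is illustrative.

```python
# (month, temporary staff, permanent staff, permanent leavers, temporary leavers)
# rows copied from the table above.
ROWS = [
    ("Jan-15", 166, 359, 7, 2),
    ("Jul-15", 250, 275, 12, 42),
    ("Feb-16", 308, 217, 3, 57),
]

def leaver_rate(staff, leavers):
    """Leavers as a percentage of staff in post, to one decimal place."""
    return round(100.0 * leavers / staff, 1)

for month, temp, perm, perm_left, temp_left in ROWS:
    print(month,
          "temporary:", leaver_rate(temp, temp_left), "%",
          "permanent:", leaver_rate(perm, perm_left), "%")
```

Expressing leavers as a rate rather than a raw count matters here because the temporary headcount nearly doubles over the period, so raw leaver counts alone overstate the deterioration.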

Words: 621 - Pages: 3

Free Essay


...Data Collection - Ballard Integrated Managed Services, Inc. (BIMS) Learning Team C QNT/351 September 22, 2015 Michael Smith Data Collection - Ballard Integrated Managed Services, Inc. (BIMS) Identify types of data collected--quantitative, qualitative, or both--and how the data is collected. A survey was sent out to all employees two paychecks prior, and a notice to complete the survey was included with their most recent paychecks. After reviewing the surveys that were returned, it was found that the data collected is both quantitative and qualitative. Questions one through ten are considered qualitative data because the responses to those questions are numbered from one (very negative) to five (very positive), which are measurements that cannot be placed on a natural numerical scale. They can only be classified or grouped into categories and are simply selected numerical codes. Questions A-D fall under quantitative data because they determine the number of employees in each department, whether they are male or female, and the length of time employed with the company. From that data one can find the average length of employment, then subcategorize by department, gender, and whether the respondent is a supervisor or manager. Identify the level of measurement for each of the variables involved in the study. For qualitative variables there are a couple of levels of measurement. Questions A, C, and D in Exhibit A fall under nominal-level data because when......

Words: 594 - Pages: 3

Premium Essay

Big Data and Data Analytics

...Big Data and Data Analytics for Managers Q1. What is meant by Big Data? How is it characterized? Give examples of Big Data. Ans. Big data applies to information that can't be processed or analysed using traditional processes, tools, or software techniques. Such data is massive in volume and can be either structured or unstructured. Though it is challenging for enterprises to handle such a huge amount of fast-moving data, or data that exceeds their current processing capacity, there still lies great potential to help companies take faster, more intelligent decisions and improve operations. There are three characteristics that define big data: 1. Volume 2. Velocity 3. Variety * Volume: The volume of data under analysis is large. Many factors contribute to the increase in data volume, for example: * Transaction-based data stored through the years. * Unstructured data streaming in from social media. * Bank data (details of the bank account holders) or e-commerce data, where customer data is required for a transaction. Earlier there used to be data storage issues, but big data analytics has solved this problem. Big data stores data in clusters across machines, also helping the user access and analyse that data. * Velocity: Data is streaming in at unprecedented speed and must be dealt with in a timely manner. RFID tags, sensors and smart metering are driving the need to deal......

Words: 973 - Pages: 4

Free Essay

Data Information

...Data and Information Summary HCI/520 11/18/2013 Data and Information Summary Today we live in a world where data is a critical resource. Information is also a critical resource and consists of data that is processed into meaningful information for the use of organizations and users. Collected data is stored in what are known as databases, where it is organized into potentially valuable information. Data, also known as raw data, is a stream of facts that are not organized or arranged into a form that people can understand or use (Gillenson, Ponniah, Kriegel, Trukhnov, Taylor, Powell, & Miller, 2008). Raw data are facts that have not yet been processed to reveal their meaning (Gillenson, Ponniah, Kriegel, Trukhnov, Taylor, Powell, & Miller, 2008). For example, when AT&T Wireless asks clients to participate in a survey about the products they have purchased or their customer service experience, the data collected is useful, but not until the raw data is organized by combining it with other similar data and analyzed into meaningful information. Information is the result of processing raw data to reveal its meaning (Coronel, Morris, & Rob, 2010). Data processing can be as simple as organizing data to reveal patterns or as complex as making forecasts or drawing inferences using statistical modeling (Gillenson, Ponniah, Kriegel, Trukhnov, Taylor, Powell, & Miller, 2008). Both data and information are types of knowledge which share......

Words: 538 - Pages: 3

Premium Essay

Data Management

...University Data Management March 18, 2014 Data partitioning is a tool that can help manage the day-to-day needs of an organization. Each organization has unique values that drive its business. All organizations have policies and processes that are influenced by their environment and industry. The use of data partitioning can help productivity by recognizing the need to categorize data to meet unique needs. This approach does require some effort: to transition to a new database approach, organizations need to assess the pros and cons of a database transition. The scale of an organization's database may be the one factor that drives adoption of this approach. Data partitioning was developed to address issues created by traditional database queries. One main problem that partitioning was created to solve is the performance of database queries. According to Alsultanny (2010), "System performance is one of the problems where a significant amount of query processing time is spent on full scans of large relational tables. Partitioning of the vertical or horizontal partitioning or a combination of both is a reliable way of solving the problem" (p.273). By separating queries into either horizontal or vertical processes, the user can avoid delays and strains on a database. This saves time, which can be used to improve an organization's productivity in its day-to-day operations. Large-scale databases receive the most benefits from partitioning. ......
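The horizontal/vertical distinction the essay describes can be sketched in plain Python over rows stored as dictionaries. This illustrates the concept only; the table, data, and function names are made up, and it is not how a database engine actually implements partitioning.

```python
# A tiny in-memory "table" (made-up data).
rows = [
    {"id": 1, "name": "Ann", "dept": "sales", "salary": 50000},
    {"id": 2, "name": "Bob", "dept": "ops",   "salary": 45000},
    {"id": 3, "name": "Cat", "dept": "sales", "salary": 55000},
]

def horizontal_partition(table, predicate):
    """Split rows into (matching, rest): each scan touches fewer rows."""
    matching = [r for r in table if predicate(r)]
    rest = [r for r in table if not predicate(r)]
    return matching, rest

def vertical_partition(table, columns):
    """Keep only the key plus the listed columns: each row is narrower."""
    return [{k: r[k] for k in ("id", *columns)} for r in table]

sales, others = horizontal_partition(rows, lambda r: r["dept"] == "sales")
narrow = vertical_partition(rows, ["name"])
print([r["id"] for r in sales], narrow[0])  # [1, 3] {'id': 1, 'name': 'Ann'}
```

A query that only needs sales rows scans the `sales` partition; a query that only needs names scans the `narrow` projection. Either way, less data is read per query, which is the performance gain the essay attributes to partitioning.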

Words: 1572 - Pages: 7

Free Essay

Data Breaches

...Daniel Baxter Nico Ferragamo Han Vo Romilla Syed IT 110 8 December 2015 Data Breaches The Case In July of 2014, JPMorgan Chase, a multinational banking and financial services holding company, was hacked. JPMorgan Chase is the largest bank in the United States, the sixth largest bank in the world, and the world's third largest public company. Initial reports from JPMorgan Chase stated that the attack had breached only about one million accounts. Further details revealed that the hack breached the accounts of seventy-six million households (roughly two-thirds of the total number of households in the United States) and about seven million small businesses. While the hack began in July, it was not fully stopped until the middle of August, and it was not disclosed to the public until September. The hack is considered one of the most serious attacks on an American corporation's information systems and is one of the largest data breaches in history. JPMorgan Chase claims that the login information associated with the accounts (such as social security numbers and passwords) was not compromised, and that the stolen information had not been involved in any fraudulent activities; however, the names, email addresses, physical addresses, and phone numbers on the accounts were taken by the hackers. The hack is believed to have been committed by a group of Russian hackers. It is also believed to have been part of a large ring of attempted attacks on as many as nine banks......

Words: 1557 - Pages: 7

Free Essay

Multivariate Data

...Executive Summary Multivariate data is a key part of any interaction in business. The data can be used to anticipate the effect of several variables. Multivariate relationships involve multiple independent variables affecting a dependent variable. These independent variables have a distinct and measurable effect on the dependent variable. These relationships can be used by managers to make decisions. The example given is that of an automobile manufacturer that uses the data to change the methods of scheduled maintenance without affecting the longevity of the vehicle. Multivariate data can show managers how different aspects can affect an outcome. Multivariate Data Multivariate data is a system of relationships that governs nearly any interactions between objects. These data relationships show how one set of variables can have an effect on another. Whenever something happens, it happens because of many factors that come into play; several things have to come together to create the effect observed. This is true of things in nature, occurrences in life, and decisions in business. Multivariate relationships are everywhere, and the effect they have is widespread. The ability to recognize and analyze these variables can be a strong asset in business management as understanding what drives certain effects can allow a manager to more accurately predict outcomes. Being able to accurately model what is going to happen is a distinct advantage for any manager. ...
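The maintenance example in the essay can be made concrete with a toy linear model in which two independent variables drive one dependent variable. The coefficients below are invented for illustration, not fitted from real vehicle data, and the function name is hypothetical.

```python
def expected_wear(mileage_km, avg_load_kg,
                  base=1.0, per_km=0.0004, per_kg=0.002):
    """Toy multivariate model: wear = base + per_km*mileage + per_kg*load.

    Each independent variable has its own distinct, measurable effect
    (its coefficient) on the dependent variable.
    """
    return base + per_km * mileage_km + per_kg * avg_load_kg

# Doubling the average load changes the prediction only through per_kg,
# so a manager can reason about each factor's contribution separately.
print(expected_wear(10_000, 500))   # ~6.0  (1 + 4 + 1)
print(expected_wear(10_000, 1000))  # ~7.0
```

Separating the contributions this way is what lets a manufacturer adjust a maintenance schedule for high-load vehicles without changing it for high-mileage ones.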

Words: 908 - Pages: 4

Premium Essay

A Data Goldmine

...A Data Goldmine Sean A Diehl BUS 540 Professor Tom Voight Ohio Dominican University March 27, 2012 A Data Goldmine According to Barron's Business Guides, "Database: a collection of data stored on a computer storage medium, such as a disk, that can be used for more than one purpose. For example, a firm that maintains a database containing information on its employees will be able to use the same data for payroll, personnel, and other purposes" Dictionary of Computer and Internet Terms (1995, p. 95). Databases are considered a wealth of resources and opportunity in today's information society. The topic of managing data and data integration is the centerpiece of author Tom Kelly's article, "Managing Multiple Companies in the Cloud". Within the first paragraph, it is readily apparent that Kelly's take on his use of cloud applications for business management is boastful and confident. According to Kelly, "…The single biggest advantage of running nine companies in the cloud is that I can be anywhere and have full access to each client's key processes through nearly any Internet-enabled device. My clients don't need to build out complicated networks and VPNs, and I don't need to juggle nine different laptops to keep data segregated. At the click of a button, I can seamlessly change hats…" (Kelly, 2011). The author's decision to use the cloud for management of business data is made through his use of the 80/20 rule. Kelly explains that if he is able to...

Words: 515 - Pages: 3

Premium Essay

Primary Data vs Secondary Data

...Differences Between Primary Data vs Secondary Data - Submitted by Arvind Kartik SOURCES OF PRIMARY DATA Regardless of any difficulty one can face in collecting primary data, it is the most authentic and reliable data source. Following are some of the sources of primary data. Experiments Experiments require an artificial or natural setting in which to perform a logical study to collect data. Experiments are more suitable for medicine, psychological studies, nutrition, and other scientific studies. In experiments the experimenter has to keep control over the influence of any extraneous variable on the results. Survey The survey is the most commonly used method in the social sciences, management, marketing, and, to some extent, psychology. Surveys can be conducted by different methods. Questionnaire: This is the most commonly used survey method. Questionnaires are lists of questions, either open-ended or close-ended, to which the respondent gives answers. A questionnaire can be administered via telephone, mail, live in a public area or in an institute, through electronic mail, through fax, and by other methods. Interview: This is a face-to-face conversation with the respondent. Interviews are slow and expensive, and they take people away from their regular jobs, but they allow in-depth questioning and follow-up questions. The interviewer can not only record the statements the interviewee makes but also observe body language and non-verbal communication such as face-pulling,......

Words: 659 - Pages: 3

Free Essay

Big Data

...Article Summary - Data, data everywhere Data 2013.10.01 | Major Media Communication | Subject Understanding Digital Media | Student no 2010017713 | Professor Soochul Kim | Name Eunkang Kim | This article treats the double-sided nature of the vast amount of information that has come with the development of technology. Even now, an amount of digital information beyond imagination is being accumulated all over the world. Not only is the amount of information increasing, but the rate at which it is produced is also accelerating. This explosion of information has several causes. The main one is technological development, which makes possible things that were impossible in the past. Digital technology converts many kinds of information into digital form, and many people make use of it through powerful digital devices. People communicating through information have contributed to the increase in the amount of information. Humans who escaped illiteracy and economic hardship have generated many kinds of information, which are utilized in several fields such as politics, economy, law, culture, science, and so on. The production rate of information is faster than the speed of technology development. Though the digital devices handling the information are becoming more varied, storage space is not enough to hold the increased information. The sea is not calm; it has big waves. Likewise, a flood of information comes into our lives. It is important to judge what information......

Words: 614 - Pages: 3

Premium Essay

Knbs Data

...KNBS DATA DISSEMINATION AND ACCESS POLICY November 2012 VISION A centre of excellence in statistics production and management MISSION To effectively manage and coordinate the entire national statistical system to enhance statistical production and utilization Herufi House, Lt. Tumbo lane P.O. Box 30266 – 00100 GPO Nairobi, Kenya Tel: +254-20-317583/86/88, 317612/22/23/51 Fax: +254-20-315977 Email: Web: WI-83-1-1 Preface Kenya National Bureau of Statistics (KNBS) is the principal agency of the Government for collecting, analysing and disseminating statistical data in Kenya. KNBS is the custodian of official statistical information and is mandated to coordinate all statistical activities and the National Statistical System (NSS) in the country. Official statistics are data produced and disseminated within the scope of the Statistical Programme of the National Statistical System (NSS) in compliance with international standards. To achieve this mandate, KNBS strives to live up to the aspirations of its vision: to be a centre of excellence in statistics production and management. Section 35 in Chapter Four (the Bill of Rights) of the new constitution of Kenya gives every citizen the right of access to information held by the State. This policy document strives to provide a framework for availing statistical information to the public in conformity with this bill and the government's open data......

Words: 3544 - Pages: 15

Free Essay

Bad Data

...Bad Data AMU DEFM420 Big Data refers to the volume, variety and velocity of available data. The issue is that undue emphasis is put on volume, or quantity, of data. Quantity is a very vague element of Big Data; there are no precise requirements for purely volume-based data. What should be considered in big data is the complexity and depth of the data: if the content of data is deep and contains detailed information, it holds more purpose. When we analyze data, we prefer to review less material of greater importance. I would rather read two pages of relevant data than read one hundred pages that contain three pages of data. This is a factor of human nature but also a business factor. The majority of what we do in government work is time sensitive; we operate on a system of end dates. With time a factor, working through Big Data that isn't always pertinent is wasteful, while in cases with no time limit, having the full three V's of big data is acceptable and may in the end give more accurate information, after spending considerable time sorting through the information, mainly the volume portions. Is the system of Big Data wrong? No, it is not wrong, but the concept is too vague. For some situations data needs to be limited; for others, not so much, so it gives us a system and collection of information that is in some cases excessive for the need. It is a double-edged sword. There are other aspects of Big Data collections useful in contracting......

Words: 325 - Pages: 2

Free Essay

Data & Information

...to function without data, information and knowledge. Data, information and knowledge are different from one another, yet they are interrelated. Data Data are unprocessed raw facts which can be either qualified and/or quantified, referring to statistical observations and other recordings or collections of evidence (Chaim Zins, 2007, p. 480). Data can be numbers or text: for example, temperature, currency, gender, age, body weight. Figure 1 is an example of data recorded in a Microsoft Excel data sheet. Figure 1 Information The outcome of data processing is information. Figure 2 expresses the process of how data is transformed into information. Data, which is the input, when processed (organized, examined, analyzed, summarized) gives information as the output. Information is processed data which gives explicit meaning to its readers. Based on the data in Figure 1, processing it gives you information about a group of 24 youths: the percentage breakdown of the number of times they eat fast food in a week, as shown in Figure 3. Figure 3 shows that youths in their twenties eat fast food at least once a week; there is even a small number of them (4.1%) who eat fast food almost every day (6 times/week). It gives information about the demand for fast food among youths in their twenties. Figure 3 The average age of this group can also be obtained from the data in the Excel data sheet in Figure 4.......
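The "raw data becomes information" step described above can be shown with a short script. The sample below is invented (the essay's actual Excel sheet is not reproduced here), so the numbers differ from Figure 3; only the processing idea is the same.

```python
# Made-up sample standing in for the Excel sheet: ages of 8 youths
# and how many times each eats fast food per week.
ages      = [20, 21, 22, 23, 24, 20, 21, 25]
visits_wk = [ 1,  2,  6,  0,  1,  3,  6,  1]

def share_at_least(values, threshold):
    """Percentage of the sample at or above the threshold."""
    hits = sum(1 for v in values if v >= threshold)
    return 100.0 * hits / len(values)

def average(values):
    return sum(values) / len(values)

# Raw data -> information: summaries a reader can act on.
print(share_at_least(visits_wk, 6))  # 25.0 (2 of 8 eat fast food ~daily)
print(average(ages))                 # 22.0
```

The raw lists mean little on their own; the percentage and the average are the "information" in the essay's sense, because they answer specific questions about the group.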

Words: 285 - Pages: 2