1. Ramada Demonstrates Its Personal Best
In 1996 the latest D. K. Shifflet survey of customer satisfaction in the hospitality industry showed mid-tier hotels continuing their downward trend in perceived customer service, reflected by more and more respondents giving ratings on customer service in the 7 or lower range on Shifflet’s 10-point scale. While Ramada’s satisfaction rates held steady, “It was only a matter of time before we experienced the problem,” says Tim Pigsley, director of operations for Ramada Franchise Systems (RFS). Shifflet research highlighted three critical areas for study that could influence customer satisfaction: hiring (finding the best people to deliver Ramada’s brand of exceptional service), training (giving employees the tools to deliver exceptional service), and motivation (providing the impetus for Ramada employees to deliver exceptional service).

Unlike some of its competitors, RFS is a totally franchised system. In such an environment, not only must headquarters contend with the variable human factor of all service operations, but additionally, RFS must contend with differing “exceptional service” standards among owners of the nearly 900 Ramada properties. “Due to the franchised system of property management, we needed for each management team and each employee to be committed to the change—to buy in to any new program—whatever shape it would take,” explains Pigsley.

“We wanted to learn and borrow from the best so we started with Disney. In every study done, the Disney experience is the benchmark for exceptional customer service. And they have a reputation for hiring the best people.” Next, RFS approached Southwest Airlines. “They have captured the essence of ‘fun’ when air travel is seen as a commodity, a hassle. People disembarking Southwest planes have smiles on their faces,” shares Pigsley. Then Ramada’s fact-finders approached Carlson Hospitality, owner of the TGI Friday’s restaurant chain. “We wanted to understand what Carlson did to generate their low employee turnover, and high employee loyalty and commitment.”

Ramada’s individual property owners do their own hiring. The process differs widely from property to property. Ramada called on research firm Predictive Index to identify characteristics that were indicative of self-motivated performers. Ramada also brought in American Hotel and Motel Educational Institute to learn what other companies were doing correctly to identify and hire the right people.

RFS also wanted direct, face-to-face employee input into the process of developing new programs in hiring, training, and motivation. “But this was a daunting prospect with more than 31,000 employees, many of whom spoke a language other than English,” explains Pigsley. Twenty-four researchers fanned out to visit each of Ramada’s 900 properties within a six-month period. “To bring about change in corporate culture and mindset would take more than employees checking off boxes on a piece of paper,” claims Pigsley. So Ramada launched the research project more like the opening of a new hotel—a festive atmosphere, complete with food and comedic entertainment. Headquarters staff arrived at each property, usually spending the morning extracting issues and information from management. Then, in an atmosphere evocative of a new hotel launch, employees were invited to share their ideas and concerns about the three initiatives. Employee suggestions and needs flowed as freely as the food and beverages. The information collection team recorded employee and management input on a detailed summary form generated for each property.

Research with employees revealed the current training approach was boring and ineffective. Most training involved videotapes, developed internally or purchased, with new hires or groups of employees watching the videos. RFS’s benchmarking research with the hospitality industry’s stellar examples of exceptional customer satisfaction, however, demonstrated that training incorporating high employee involvement generates more knowledgeable employees, one of the critical elements of customers’ perceptions of higher quality customer service. And training approaches that involve “fun” are winners with all employees—no matter what position they fill— and are more likely to generate a positive employee attitude, a second critical element of exceptional customer service.

It was standard industry practice for employee motivation programs to be built around a limited number of big-ticket rewards. Employees indicated that they had a hard time maintaining enthusiasm for a program that took too much effort over a long time to earn one of only a few rewards. RFS found that employees are better motivated by more numerous, smaller rewards that directly affect their everyday lives.

Before Ramada started on its program of change, it knew it would need to document the program’s success. So it hired Unifocus to conduct in-depth guest surveys at every property as the Personal Best program rolled out. Additionally, it continues to subscribe to D. K. Shifflet’s syndicated research on customer satisfaction.

In hiring, Ramada property managers now screen prospective employees for characteristics revealed by Predictive Index. RFS scrapped its traditional training, replacing it with interactive, CD-based, multimedia training. Self-paced learning now drives the lighthearted, 24-component training sequence. Property managers, who often do not hire large numbers of employees at any one time, are pleased with the more flexible approach and employees find the process more interesting.

The newly devised motivation program focuses on rewarding employees, not only for exceptional performance reflected in customer letters and surveys, but also for supervisor and peer nominations, completion of training modules, and continued self-directed efforts for personal development by employees. “We had had grandiose ideas of awarding big-ticket items like airline tickets to the vacation of a lifetime, but after listening to employees, we substituted certificates for shoes at FootLocker, lunch at Macaroni Grill, and free tanks of gas. We literally have hundreds of reward partners in the Personal Best program,” reveals Pigsley, “all related to the way our 31,000 employees spend their personal time.”

By many standards the Personal Best initiative is a success.

• In the latest D. K. Shifflet service ratings, Ramada’s scores in the 8–10 range (good to exceptional) were up 30.5 percent, and its scores in the 1–4 range (unacceptable to poor) were down 24 percent.

• Employees are cashing in exceptional service points for a growing number of rewards each year.

• Personal Best is no longer just a human resources program but an overall strategic planning initiative. Employees’ stories of exceptional customer service are prominently reflected in Ramada’s advertising, and RFS has committed $8 million over the past three years to sharing these stories.

“Ramada’s Personal Best hospitality advertising campaign (winner of the travel industry’s most prestigious advertising award: HSMAI’s Best of Show) is a reflection of our commitment to the employee of Ramada franchises,” says Steve Belmonte, president and CEO of RFS, Inc. One spot’s closing line, “At Ramada, we throw ourselves into our work,” sums up the effort that Ramada is placing on customer satisfaction—an effort that won it the 1999 American Express “Best Practice” award.

Question:

1. What were the management problems and research objectives (management-research question hierarchy) of Ramada?

2. Describe Ramada’s research process.

a. Explain the role and process of exploration in Ramada’s research.

b. What role did secondary data play in the exploration phase of the research?

c. What research process decisions were made? (Remember to include research by outside suppliers.)

d. What sampling methodology was used? Why was this appropriate for this study?

e. What role did property owners/managers play in the research design?

f. Why did Ramada choose to conduct the research in a nontraditional, party-like atmosphere? What are some advantages and disadvantages of such an approach?

3. How were the research findings reflected in the ultimate management decisions?

2. State Farm: Dangerous Intersections

State Farm Insurance has a rich history of proactive safety involvement in auto and appliance design to reduce injury and property loss. In June 2001, State Farm Insurance, Inc., released the second report in its Dangerous Intersection reporting series.

State Farm modeled its program after an initiative by the Insurance Corporation of British Columbia, Canada (ICBC), and the American Automobile Association of Michigan (AAA) to help position the nation’s largest auto insurer as the most safety-conscious insurer. ICBC had patterned its program on an earlier effort in Victoria, Australia. AAA, in turn, benchmarked its program on the ICBC program. AAA invited State Farm to help fund one of its intersection studies. State Farm saw this as an opportunity to expand its effort into a nationwide campaign in 1999. “The 2001 study is part of a larger effort focused on loss prevention and improving the safety of intersections around the U.S.A.,” shared State Farm research engineer John Nepomuceno. State Farm has allocated significant resources as well as funds to the initiative. Since its inception, every city with an intersection on the overall list of dangerous intersections has been eligible to apply for a $20,000 grant to defray the cost of a comprehensive traffic engineering study of the intersection. Additionally, each city named to the national top 10 dangerous intersection list is eligible for a grant of $100,000 per intersection to defray some of the cost of making improvements. In all, State Farm offered $4.44 million to the safety initiative in its first year.

Due to its large market share, State Farm is the only U.S. insurer in a position to mine its databases for the requisite information on accidents to come up with a viable U.S. list. But it found that although it had the interest to do so, its data warehouse did not have sufficient information to tally accident rates for intersections. To rectify this, in 1998 State Farm included a location field as part of the data that its claims adjusters regularly complete. This location information, in open-text format, indicates whether the accident took place in an intersection or as part of an incident related to an intersection accident, and identifies the intersection. Following the 1999 study, the fields for identifying intersections were further refined.

In the first study using 1998 data (reported in June 1999) as well as the 2001 study, State Farm looked at accidents involving only intersecting roads. They excluded any accident that occurred at the intersection of a road and a highway access or egress ramp. State Farm also looked only at accidents where the State Farm–insured driver was at fault.

Because of the study’s focus on road safety engineering, the first study ignored accident severity and made no attempt to isolate demographic (age or gender of driver, driving record, etc.) or geographic (weather conditions, population of area, etc.) factors related to the accident. It also looked only at State Farm’s own internal incident reports, not at any public records involving traffic patterns or volume or police incident reports.

Based on industry market share information, State Farm was able to estimate the total number of crashes at a given intersection. “There was good reason to exclude police reports and traffic counts,” explained Nepomuceno. “The reporting threshold for police filing reports on accidents differs widely from jurisdiction to jurisdiction. Some will only fill out reports when personal injury or criminal behavior is involved. Others will fill them out only when a vehicle is damaged to the degree that it needs to be towed from the scene. Still others fill out such reports on every incident. Traffic volume reports are often prepared infrequently and often by independent sources. Not only may the data quality be questionable, but the time period in which the data was collected may not match our 1998 incident reports in every city involved. Also, when traffic volumes are factored in, low volume roads with relatively few crashes are often deprioritized. Now that we’re through with the 2001 study, we are asking ourselves if intersection volume should be factored in, and if so, how it can be included without significantly increasing our effort in data processing.”

In the 1998 study, State Farm identified 172 dangerous intersections. The top 10 most dangerous intersections in the United States were released publicly (www.statefarm.com). Public affairs staff for each state could request that up to 10 intersections be identified for their state. “This was usually determined by the resources that our local public affairs staff were willing to put toward the program,” shared Nepomuceno. “Each state had to recognize a top 10 national intersection, but they could request that no more be released or that up to 10 intersections within their state be released.” As of August 2001, 97 cities (56.4 percent) had applied for State Farm grants.

“While some in the media claimed we had ‘hit a home run’ with the program, we quickly learned that there was a lot more at stake than we had anticipated in generating goodwill with transportation engineers,” indicated Nepomuceno. “This is, after all, a traffic safety program and we would not achieve that goal without having the cooperation of the traffic and transportation engineering community. First, while initially they lauded us for the attention our listing brought to traffic concerns, we and they soon discovered that the spotlight generated demand for immediate solutions, solutions that they often didn’t have budgets to implement. Also, from their perspective, not all accidents are the same; locations with accidents that result in injuries and death should be given more attention. Some jurisdictions were upset that we didn’t consider intersection volume and we didn’t include accident rate data. The fact that the State Farm grants were intended to study the intersection more completely wasn’t always seen as a solution to their immediate problems.”

To include accident severity, State Farm needed a measurement system for classifying accidents. For the 2001 study, which used 1999 and 2000 accident data, State Farm calculated a median property damage accident payout (approximately $1,700). Incidents requiring payout of more than the median amount were classified as “high severity”; those requiring less, “low severity.” Additionally, State Farm chose to classify each accident using a multipoint scale. Zero was assigned to “no property damage, no personal injury” incidents and a higher number was assigned to “high property damage, personal injury” incidents, with numbers in between assigned to levels of property damage and personal injury (see Exhibit C-SF 1–1). Accident scores were summed to create an aggregate danger index for each intersection. Each intersection was then weighted by dividing the danger index by State Farm’s market share in the area. Of the 224 intersections identified, the top 10 were released to the national media.
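
The scoring described above reduces to a few arithmetic steps: code each at-fault claim by severity, sum the codes per intersection, and divide by local market share (market share was also used to estimate total crashes). The sketch below illustrates that logic with hypothetical claims, an illustrative severity coding, and an assumed 18 percent market share; none of the numbers are State Farm data.

```python
# Minimal sketch of the intersection scoring logic described in the case.
# Severity codes, market share, and claim records are illustrative only.

MEDIAN_PAYOUT = 1700  # approximate median property-damage payout from the case

def severity_code(payout, injury):
    """Assign a simple danger code: 0 = no damage/no injury,
    higher numbers for high damage and/or injury (scale is illustrative)."""
    if payout == 0 and not injury:
        return 0
    high_damage = payout > MEDIAN_PAYOUT
    if high_damage and injury:
        return 3
    if high_damage or injury:
        return 2
    return 1

def danger_index(claims, market_share):
    """Sum severity codes for one intersection, then weight by dividing
    by the insurer's local market share; also estimate total crashes."""
    raw = sum(severity_code(payout, injury) for payout, injury in claims)
    estimated_total_crashes = len(claims) / market_share
    return raw / market_share, estimated_total_crashes

# Hypothetical intersection: (payout, injury?) per insured-at-fault claim
claims = [(0, False), (900, False), (2500, True), (1800, False), (3200, True)]
index, est_crashes = danger_index(claims, market_share=0.18)
print(f"weighted danger index = {index:.1f}, estimated total crashes = {est_crashes:.0f}")
```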

Each of those 224 is now eligible for the $20,000 grant to study the intersection to identify specific improvements; the top 10 are also eligible for $100,000 grants for improvements. In this second round, State Farm has committed $5.48 million to the safety program.

State Farm is making plans to track the success of the Dangerous Intersection program. Once cities notify them of the completion of an intersection’s improvements, State Farm will start tracking accidents for that intersection for a period of one year. The first post-improvement evaluation study is expected in 2002. Additionally, State Farm is taking steps to learn from the characteristics of the dangerous intersections. Each grant application for an affected city’s study of a dangerous intersection must include:

• Collection and analysis of police report data.

• An engineer’s “geometric review” of the intersection.

• A capacity profile of the intersection.

• A traffic conflict study.

• A benefit-cost analysis.

• A schedule of improvements (short-term, intermediate-term, and long-term).

State Farm plans to use the new data to identify patterns of problems. This may lead to a model of desired intersection traits against which improvement plans can be assessed, further increasing the effectiveness of the loss prevention program and making life a little easier for the transportation engineers with whom they must partner to achieve safety success.

EXHIBIT C-SF 1–1 Danger Codes
[Exhibit not reproduced]

Questions

1. Identify the various constructs and concepts involved in the study.

2. What hypothesis might drive the research of one of the cities on the top 10 dangerous intersection list?

3. Evaluate the methodology for State Farm’s research.

4. If you were State Farm, how would you address the concerns of transportation engineers?

5. If you were State Farm, would you use traffic volume counts as part of the 2003 study? What concerns, other than those expressed by Nepomuceno, do you have?

3. Calling up Attendance: TeleCenter System Users Forum
Nashville-based TCS Management Group markets TeleCenter System, software used to forecast staffing needs for reservation centers, order centers, or customer service centers. Using TeleCenter System allows companies to have the correct number of people on duty at any given hour of the day or night, thereby optimizing the delivery of good service while holding costs as low as possible. TCS has an impressive list of customers, including American Express, British Airways, Sears, Amtrak, and Citicorp.

TCS was planning a special two-day educational event, Users Forum, for its 300-plus customers, but was unsure how many TeleCenter System users would attend. Scheduled at the Opryland Hotel, the forum would offer speakers, workshops, and presentations. While TCS would underwrite the costs associated with planning the meeting and preparing the presentations, customers would be responsible for paying a fee to attend, as well as their own hotel and travel expenses to Nashville. “Ten weeks before the forum, we weren’t sure whether we would have 40 people or 140 people coming to Nashville,” shared Jim Gordon, CEO of TCS.

While TCS had previously done most of its own customer satisfaction research, given the time frame of the need, it turned to Nashville-based Prince Marketing, who promised to design, conduct, and interpret survey results within 21 days.

Three objectives were set for the phone survey:

• Determine the likelihood of Users Forum attendance.

• Update the TCS software users database (for subsequent use in mailing quarterly newsletters, special announcements, and software updates).

• Measure the level of user satisfaction (with the company and its software generally, as well as regarding specific software features and issues).

Respondents were asked to rate on a 7-point scale the software’s ease of use, the usefulness of software-generated reports, and satisfaction with service. They were also asked whether they would recommend the software and why/why not; whether they were aware of the Users Forum; whether their company planned to send a representative; and whether the customer needed or wanted more information on the Users Forum. Prince faxed the names and addresses of respondents indicating an interest in the Users Forum to TCS, which sent promotional materials immediately.

Prince surveyed 315 customers: 161 users and 154 managers. Prince predicted that 115 people would attend the forum. Actual Users Forum attendance was 139.
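
The case does not say how Prince Marketing converted phone responses into its 115-person forecast. One common approach in attendance and intent research is to discount each stated-likelihood category by an assumed conversion rate, as in the sketch below; the category counts and weights are hypothetical, not Prince’s.

```python
# One common way to turn stated likelihood of attendance into a forecast:
# discount each response category by an assumed conversion rate.
# The counts and weights below are hypothetical; the case does not report
# how Prince Marketing produced its 115-person estimate.

likelihood_counts = {          # respondents per category (hypothetical)
    "definitely will attend": 60,
    "probably will attend": 90,
    "might attend": 100,
    "will not attend": 65,
}
conversion_weights = {         # assumed share who actually show up
    "definitely will attend": 0.80,
    "probably will attend": 0.50,
    "might attend": 0.20,
    "will not attend": 0.02,
}

forecast = sum(n * conversion_weights[cat] for cat, n in likelihood_counts.items())
print(f"forecast attendance = {forecast:.0f} (actual turnout was 139)")
```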

On customer service, 34 percent of respondents gave TCS a 7, the highest point on the rating scale. Yet respondents also offered that they wanted shorter response time and longer operating hours for telephone support staff, including Saturday access. TCS CEO Gordon said, “We have redeployed some of our people, expanded Saturday coverage, and instituted a beeper system to increase our responsiveness.”

TCS received its lowest scores on ease of use, with 60 percent of respondents giving it a 5 or higher on the 7-point scale, while 16 percent refused to answer. The research confirmed anecdotal evidence and reinforced internal initiatives to improve ease of use.

Fully 84 percent said they would recommend the TCS system to colleagues, with 16 percent indicating they were too new to the software to form an opinion. TCS plans to use this endorsement to attract new users.

“The positive survey results created tremendous esprit de corps for the whole staff,” claims Gordon. “We were able to identify these concerns ahead of our Users Forum and develop appropriate responses. All in all, the survey told us we’re on the right track—and that alone justified our investment in the research.”

Questions

1. Build the management-research question hierarchy.

2. Discuss the communication methodology chosen.

3. Develop the preliminary analysis plan.

4. How would you deal with the 16 percent of the sample who were new to the software?

5. Discuss the advantages and concerns of incorporating or closely linking marketing activities with research activities.

Source: http://www.quirks.com/CGI-BIN/SM40i.exe?docid=3000:58911&%70assArticleID=409. Used with permission of Pamela S. Schindler and Donald R. Cooper. ©

4. McDonald’s Tests Catfish Sandwich

Nashville, Tennessee—McDonald’s Corp. is trying to hook customers in southern test markets, including one in Kentucky, on a new catfish sandwich.
The chain is serving its newest sandwich in Bowling Green, Kentucky; Memphis, Chattanooga, and Jackson, Tennessee; Huntsville, Alabama; Jonesboro, Arkansas; and Columbus, Tupelo, Greenville, and Greenwood, Mississippi, said Jane Basten, a marketing specialist for McDonald’s in Nashville.
The sandwich consists of a 2.3-ounce catfish patty, lettuce, and tangy sauce served on a homestyle bun.
The company will evaluate the sandwich based on sales and supply availability after a six-week ad campaign ends in mid-April. “The advertising will be similar to what we’re doing right now with the grilled steak sandwich,” Basten said. “We will promote it to the fullest and see what happens.”
The Catfish Institute, an industry promotion association based in Belzoni, Mississippi, is supplying the catfish. Catfish Institute director Bill Allen said catfish farmers, processors, and marketers are “very excited about this prospect for our industry. This is super good news. But we don’t want to get our hopes up too much and start thinking this is going to be our salvation, because we already have a viable industry.” Allen said that catfish firms that remember earlier tie-ups with major restaurant chains such as Church’s Fried Chicken are cautiously optimistic about the McDonald’s deal.

QUESTION:
1. The management team for new product development was interested in assessing the relevancy of the chosen test markets to the three states designated for rollout if the test market was satisfactory (Tennessee, Alabama, and Georgia).
a. What are your conclusions about the representativeness of the test cities to the designated rollout states?
b. What secondary data should you present to support your conclusions? Where will you obtain this data?

5. Observational studies

Assume you are the manufacturer of modular office systems and furniture as well as office organization elements (desktop and wall organizers, filing systems, etc.). Your company has been asked to propose an observational study to examine the use of office space by white-collar and managerial workers for a large insurance company. This study will be part of a project to improve office efficiency and paperwork flow. It is expected to involve the redesign of office space and the purchase of new office furniture and organization elements.

1. What are the varieties of information that might be observed?

2. Select a limited number of content areas for study, and operationally define the observation acts that should be measured.

3. Develop a checklist to be used by observers in the previous study.

a) Determine how many observers you need and assign two or three to a specific observation task.

b) Compare the results of your group members’ checklists for stability of recorded perceptions.
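
For item 3(b), comparing group members’ checklists for stability of recorded perceptions amounts to measuring inter-observer agreement. A minimal sketch, assuming two observers coded the same six observation acts with a simple category checklist (the items and codes are invented for illustration):

```python
# Sketch for item 3(b): comparing two observers' checklists for stability
# of recorded perceptions, using percent agreement and Cohen's kappa.
# The checklist items and codes are hypothetical.

from collections import Counter

observer_a = ["desk cluttered", "files on floor", "wall organizer used",
              "desk cluttered", "inbox overflowing", "files on floor"]
observer_b = ["desk cluttered", "files on floor", "wall organizer used",
              "inbox overflowing", "inbox overflowing", "files on floor"]

n = len(observer_a)
agreements = sum(a == b for a, b in zip(observer_a, observer_b))
p_observed = agreements / n

# Chance agreement for Cohen's kappa: sum over codes of the product of
# each observer's marginal proportion for that code
counts_a, counts_b = Counter(observer_a), Counter(observer_b)
p_chance = sum((counts_a[c] / n) * (counts_b[c] / n)
               for c in set(counts_a) | set(counts_b))
kappa = (p_observed - p_chance) / (1 - p_chance)

print(f"percent agreement = {p_observed:.2f}, Cohen's kappa = {kappa:.2f}")
```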

6. Sturgel Division
“Now that I write it all down, I see we have changed a lot!” mused Martha as she put the finishing touches on her annual status report. “In fact,” she went on, “the name Information Services indicates most of the changes. We used to be Information Systems.”

Martha was Information Services (IS) manager for the Sturgel Division of a major manufacturing company. Sturgel developed and manufactured (mostly small) household appliances on a 230-acre site in the southeast part of the United States. While she had managed the IS department for only 15 months, Martha had been part of the department for nine years, all at the same location.

The late 1980s and early 1990s were a time of many changes for the information systems departments in most companies, including the Sturgel Division. Technological change drove most of the organizational change: The price performance of most computer equipment had improved by compound rates of perhaps 30 percent per year for several decades. This meant that many companies perceived that they no longer needed a “computer center.” Individual users could simply do whatever they needed to do on a desktop computer in their own offices.

Martha made a rough list of the major influences she’d dealt with over the last few years, calling them “IS Transitions.” They included:

• From running the computer to providing information services to the company.

• From “owning” the data to consulting with user departments about use of the company’s information resources.

• From emphasizing mainframes to emphasizing terminals, personal computers, and telecommunications. Indeed, sometimes it seemed that the telephone, word processing, and clerical and library departments had all been combined.

• From developing the company’s applications to advising users on how to develop their own applications.

Each transition had been difficult, for each required its own combination of hardware, software, data, procedures, and people, most of which were different at the end of the transition.

Currently, the Information Services department had four sections, each with a manager who reported to Martha:

1. Systems operations. The systems operations section ran two mainframe computers and one minicomputer that operated as a telecommunications “node” and link to other divisions and corporate headquarters. This section provided operator support for three shifts of operation, as well as systems maintenance, operating system updates, and so forth. The vendor of the mainframes handled hardware maintenance.

2. Application development. The application development section really should be renamed, Martha thought. Most of its activities involved database design and maintenance, though two small groups developed financially oriented applications and maintained several software packages aimed at serving the (fairly small) engineering staff at Sturgel.

3. PC services. The PC services section developed division standards for personal computers at Sturgel and consulted with users who wanted PCs of their own. The users purchased PCs out of their own budgets, but they were required to meet certain hardware and software standards as set by IS. This section also evaluated general-purpose software, such as word processing and spreadsheet packages, and recommended to users when they should switch to a new version or release. They also did a fair amount of “hand holding” for users who had difficulty in developing their own applications.

4. Telecommunications. The telecommunications section maintained the local area network at the Sturgel site as well as links to corporate headquarters and other company divisions. This section also advised users on such activities as links to legal databases and information retrieval services, though that activity, thought Martha, might really fit better in the application development section.

Martha felt good about the organization in general. While it had been a bit of a scramble to develop an organization that could deal with the fast pace of technological change in the computer field, she felt they had more or less done it. One part of it still made her nervous, though. The biggest theme in all the changes, from an IS perspective, was the change from computing and running computers to serving large numbers of people directly.

She felt they were providing good service, but she had no regular way of knowing how her users perceived it. She thought she should consider a regular survey of her users and their perceptions of service received from the IS department. That way, she figured, she would be in a position to have data to back up her informal sense, and she would (presumably) learn about changes in those perceptions (for better or for worse) more quickly.

QUESTION:
1. Martha would like you to develop an appropriate research design.

a. What kind of survey should Martha run, or should she?

b. How should it be administered?

c. What kinds of forms, questionnaires, or other survey approaches should be developed?

7. TRANSIT DISPLAY ADVERTISING INC.

EXPERIMENT IN TRANSIT ADVERTISING

Transit Display Advertising, Inc. (TDA) specializes in outdoor advertising, especially transit ads on cabs and buses. TDA works in two separate capacities: (1) it designs and places advertising for clients, and (2) it represents a number of metropolitan taxi and bus companies in selling advertising space to other advertisers.

Bob Martin was TDA’s general manager for Florida. A former general manager for several South Florida radio stations, Martin had always been bothered by the difficulty of showing the effectiveness of bus ads compared with other media. In radio, for example, advertising research was often conducted to show the medium’s effectiveness in reaching various demographic targets. As Martin often said, “I’ve been in radio all of my life, and I’m used to numbers.”

The Research Project

Martin was familiar with effectiveness studies which were conducted for billboard advertising. Some used a before/after interview technique for specific test billboards in certain locations. Martin decided to adapt this research technique to bus advertising. Basically, he planned to first determine people’s awareness or knowledge of a subject, run ads on buses for a specified period of time, and then determine their awareness after the campaign. The positive change would show how bus advertising could be effectively utilized.

Rather than use a product or an advertising slogan for the test, Martin went to an encyclopedia and searched for a suitable subject. He thought that it would be good to use something unambiguous and simple, such as questions concerning world capitals or history. Martin turned to the section on American presidents and found his answer. Martin commissioned Advanced Market Research (AMR) to conduct telephone interviews in the Miami area with persons eighteen or older, in category groups which reflected the general population breakdown in that area. In addition to pinpointing the demographics, AMR was to ask two questions: “Who was the 30th President of the U.S.?” and “Who was Eisenhower’s vice-president?” The first (Calvin Coolidge) was Martin’s test question and the second (Richard Nixon) was a control question.

Pre-Test Results

In June AMR interviewed 1,524 persons ages eighteen and older. Roughly four percent of the persons interviewed knew that Coolidge was the 30th President. About twenty-six percent knew that Nixon was the correct answer to the other question.

Next, Martin placed a somewhat cryptic twelve-foot-long banner on 130 buses in the greater Miami area in early July. It read, “Calvin Coolidge, 30th President of the U.S.” (see Exhibit 1). Within a few days after the “ads” were put on buses, however, the local media gave the story extensive coverage. Stories, including pictures of the buses, appeared in local newspapers on the front page. Several television stations also picked up the story. Concerned about the immediate impact of the media coverage on the test, Martin commissioned AMR to do an intermediate study covering the last two weeks of July. AMR found that the results had not changed significantly. As Martin explained, “Either people didn’t read the papers or watch TV, or they forgot about it right away.” The Coolidge message was left on the buses for about six weeks.

Post-Test Results

In late August, AMR interviewed 1,184 persons asking the same questions. Among men, those knowing that Coolidge was the 30th U.S. President had increased from 4.3 percent to 13.2 percent. Among those 18 to 34, the percentage increased from 0.4 to 9.8. The most impressive gain was among men ages 35 to 54. That figure went from 1.4 percent to 16.3 percent two months later.
Among all women, there was a 100 percent increase, going from 3.6 to 7.1 percent. Among females eighteen to thirty-four, the results went from 0.7 percent to 6.1 percent. In the thirty-five to forty-four group, the pre-ad result was 1.7 percent versus 7.0 afterward. The percentage of correct answers in the fifty-five plus category dropped from 4.2 percent to 2.8 percent.
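
One way to judge whether a pre/post change of this kind could be due to sampling error alone is a two-proportion z-test on the share of correct answers. The sketch below uses the reported sample sizes (1,524 pre, 1,184 post) and the roughly 4 percent pre-test recall; the overall post-test proportion is assumed for illustration, since the case reports only subgroup figures.

```python
# Sketch of a two-proportion z-test for the change in correct recall.
# The pre-test figure (~4% of 1,524) comes from the case; the overall
# post-test proportion of 0.10 is assumed here for illustration.

from math import sqrt
from statistics import NormalDist

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z statistic and two-sided p-value."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

z, p = two_proportion_z(p1=0.04, n1=1524, p2=0.10, n2=1184)  # 0.10 is assumed
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```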

Overall Results

Unexpectedly, the results of the control question also increased, from 25.6 percent overall to 37.3 percent. Martin had overlooked the fact that it had been the fifth anniversary of the Watergate affair and that Richard Nixon’s name had frequently been in the news lately.
Overall, Martin was pleased with the results. Now, he was prepared to present quantitative evidence of the effectiveness of transit advertising. He was quoted as saying that the project “has been about as successful as I could wish it to be. It has really raised our level of believability.”

Questions

1. How would you describe this test as an experimental design?

2. What major factors could affect the validity of this test?

3. How could you explain the reduction in correct recall among older women?

4. Overall, do you agree with Martin concerning the believability of the results?

8. Amos Brown Chevrolet of Reno

The Amos Brown Chevrolet dealership, located in Reno, Nevada, wanted to know how people who intended to buy a new American-made automobile in the next 12 months view their purchase. The owner, Amos Brown, called the marketing department at the University of Nevada-Reno and arranged for a class project to be taken on by Professor Thomas Clary’s undergraduate marketing research students. Professor Clary had a large class that semester, so he decided to divide the class into two groups and to have each group compete against the other to see which one designed and executed the better survey.

Both groups worked diligently on the survey over the semester. They met with Mr. Brown, discussed the dealership with his managers, conducted focus groups, and consulted the literature on brand, store, and company image research. Both teams conducted telephone surveys, whose findings are presented in their final reports.

Professor Clary offered to grant extra credit to each team if it gave a formal presentation of its research design, findings, and recommendations.

1. Contrast the different ways these findings can be presented in graphical form to the Amos Brown Dealership management group. Which student team is able to present its findings more effectively? How and why?

2. What are the managerial implications apparent in each team’s findings? Identify the implications and recommendations for Amos Brown Chevrolet as they are evident in each team’s findings.

Findings of Professor Clary’s Marketing Research Teams

Team One’s Findings for Amos Brown Chevrolet
IMPORTANCE OF FEATURES OF DEALERSHIP IN DECIDING TO BUY THERE
|Feature |Percent |
|Competitive prices |86% |
|No high pressure |75% |
|Good service facilities |73% |
|Low-cost financing |68% |
|Many models in stock |43% |
|Convenient location |35% |
|Friendly salespersons |32% |

IMAGE OF AMOS BROWN CHEVROLET DEALERSHIP: PERCENT RESPONDING “YES”
|Competitive prices |45% |
|No high pressure |32% |
|Good service facilities |80% |
|Low-cost financing |78% |
|Many models in stock |50% |
|Convenient location |81% |
|Friendly salespersons |20% |

Team Two’s Findings for Amos Brown Chevrolet
IMPORTANCE AND IMAGE OF AMOS BROWN CHEVROLET DEALERSHIP
|Feature |Importance (a) |Rating (b) |
|Competitive prices |6.5 |1.3 |
|No high pressure |6.2 |3.6 |
|Good service facilities |5.0 |4.3 |
|Low-cost financing |4.7 |3.9 |
|Many models in stock |3.1 |3.0 |
|Convenient location |2.2 |4.1 |
|Friendly salespersons |2.0 |1.2 |

a. Based on a seven-point scale where 1 = unimportant and 7 = extremely important.
b. Based on a five-point scale where 1 = poor and 5 = excellent performance.
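
Team Two’s numbers lend themselves to a simple importance-performance comparison: put importance and rating on a common scale and look for features that are important but rated poorly. A minimal sketch follows; rescaling the 1–7 importance scores to the 1–5 rating scale is an assumption made for illustration, not part of the teams’ reports.

```python
# Minimal sketch of importance-performance analysis using Team Two's table.
# Importance (1-7 scale) is rescaled to the rating's 1-5 scale so the two
# columns are comparable; that rescaling choice is an assumption.

features = {  # feature: (importance on 1-7, dealership rating on 1-5)
    "Competitive prices":       (6.5, 1.3),
    "No high pressure":         (6.2, 3.6),
    "Good service facilities":  (5.0, 4.3),
    "Low-cost financing":       (4.7, 3.9),
    "Many models in stock":     (3.1, 3.0),
    "Convenient location":      (2.2, 4.1),
    "Friendly salespersons":    (2.0, 1.2),
}

def rescale(x, old_min=1, old_max=7, new_min=1, new_max=5):
    """Map a value from the 1-7 importance scale onto the 1-5 rating scale."""
    return new_min + (x - old_min) * (new_max - new_min) / (old_max - old_min)

print(f"{'Feature':<25}{'Importance(1-5)':>16}{'Rating':>8}{'Gap':>8}")
for name, (imp, rating) in sorted(features.items(), key=lambda kv: -kv[1][0]):
    imp5 = rescale(imp)
    gap = rating - imp5          # negative gap = important but under-delivering
    print(f"{name:<25}{imp5:>16.1f}{rating:>8.1f}{gap:>8.1f}")
```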

9. BBQ Product Crosses over the Lines of Varied Tastes

Rich Products Corp. is hoping its frozen barbecue will appeal to the wide tastes in its narrow market, but it realizes consumers will need a nudge in that direction.

Enter Ruby Taylene Dodge, waitress down at the Port-O-Rama and major figure in the company’s marketing campaign for its new product, Rich’s Southern Barbeque. Barbecue is a regional delicacy; it varies in taste from county to county throughout the Southeast. According to Joe Tindall, the company’s product development manager for new products, Rich Products (Buffalo, New York) had to develop a tangy product to appeal to varied tastes and had to persuade consumers they’d like it.

To cross over regional and local differences in the six-city market in the Southeast, Long, Haymes & Carr Advertising (LH&C), Winston-Salem, North Carolina, has launched a series of 30-second TV ads called “Please Don’t Tell ’Em Ruby Sent You.” “The fictitious Ruby is supposed to give the product authenticity without trying to compete with barbecue restaurants or stands,” said Don Van Erden, vice president/management supervisor, LH&C.

“Our research told us that no one has more rapport and credibility with the barbecue-eating public than the real-life barbecue waitress,” Van Erden said. “In Ruby, we have a vivid persona who’s believable because she’s based on real barbecue waitresses we’ve observed.” Rich Products hopes Ruby will reach all consumers with her friendly Southern accent and down-home sincerity.

“I’m a loyal employee of the Port-O-Rama, but my real true love is Rich’s Frozen Barbeque,” Ruby says in one spot. In another she’s wearing a disguise. “I can’t just go to my grocer’s freezer for Rich’s Barbeque,” she says. “I’ve got a career at the Port-O-Rama to consider.”

She praises the product in all the spots but, fearful of losing her job, warns viewers, “Just please don’t tell ’em Ruby sent you.” The microwavable barbecue entrees were test marketed last year in Nashville, Tennessee, Little Rock, Arkansas, and the Alabama cities of Birmingham, Huntsville, Montgomery, and Tuscaloosa. “That’s the market now, but expansion into other areas is planned,” Tindall said.

Question:
1. What measurement and scaling issues should be considered when developing a study to measure consumers’ attitudes toward barbecue in general and, specifically, Rich’s Southern Barbeque?

2. Assume Rich’s wanted to test people’s preference for its barbecue versus the other leading brands (of which there are five). What would you recommend to measure these preferences?

3. What measurement and scaling issues should be considered when developing a study to measure the effectiveness of “Ruby” as a character spokesperson for Rich’s Barbeque?

10. Inquiring Minds Want to Know—NOW!

Penton Media, a publisher of such business magazines as Industry Week, Machine Design, and Restaurant Hospitality, was experiencing a decline in use of publication reader service cards. This postcard-sized device features a series of numbers, with one number assigned to each ad appearing in the publication. Readers circle the advertiser’s number to request product or service information by mail. Cards are used to track reader inquiries stimulated by advertising within the magazine. “By 1998 there was a growing belief in many quarters that business publication advertising was generating fewer leads than in the past,” shares Ken Long, director of Penton Research Services. “Knowing whether or not this is true is complicated by the fact that many companies don’t track the source of their leads.” This belief, however, could ultimately lead to lower advertising revenues if alternate methods of inquiry stimulation went untracked.

Penton started its research by comparing inquiry response options offered within September issues of 12 Penton magazines, including Industry Week. Ads were drawn from two years: 1992 (648 ads) and 1997 (690 ads). The average number of response options per ad was 3.3 in 1992, growing to 4.1 in 1997. More than half of 1997 ads offered toll-free telephone numbers and fax numbers. “Two inquiry methods that are commonplace today, sending e-mail and visiting an advertiser’s Internet website, were virtually nonexistent in 1992,” noted Long. Not a single 1992 ad invited readers to visit a website, and just one ad listed an e-mail address. Website addresses were found in three of five (60.9 percent) 1997 ads, with e-mail addresses provided in 17.7 percent of ads. Today, many websites contain a “contact us” feature that generates an e-mail message of inquiry. In 1997, advertisers were including their postal mailing address only 55.5 percent of the time, compared with 69 percent in 1992 ads.

Penton pretested a reader-targeted mail questionnaire by phone with a small sample drawn from its database of 1.7 million domestic subscribers. A second pretest, by mail, involved 300 subscribers. Penton mailed the finalized study to 4,000 managers, executives, engineers, and purchasing agents selected from the U.S. Penton database. The survey sample was constructed using stratified disproportionate random sampling, with subscribers considered as belonging to one of 42 cells (seven industry groups by six job titles).

A total of 710 completed questionnaires were received, with 676 of the respondents indicating that they were purchase decision makers for their organization. Penton analyzed only the answers of these 676 buyers. Data were analyzed by weighting responses in each cell by their percentage makeup in the overall population. The overall margin of error for the survey was ±4 percent at the 95 percent level of confidence. In-depth follow-up telephone interviews were conducted with 40 respondents to gain a deeper understanding of their behavior and attitudes.

Almost every respondent (97.7 percent) had contacted at least one advertiser during the past year. Newer methods of making inquiries—Web visits, fax-on-demand, or e-mail—were used by half (49.1 percent) of the buyers surveyed. But a look ahead shows the true impact of information technology. Within the next five years, 73.7 percent expect to respond to more ads by sending e-mail to the company. In addition, 72.2 percent anticipate visiting an advertiser’s website, and 60 percent expect to increase their use of fax-on-demand. Three out of five purchasing decision makers have access to the Internet, and 74.3 percent of those without Internet service expect to have it within the next five years. Seven of 10 (72.4 percent) respondents plan to use the Internet to research potential suppliers, products, or services during the next five years, compared to 33.1 percent using it for that purpose during the past year.
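
A minimal sketch of the weighting and margin-of-error arithmetic described above: responses in each cell are weighted by that cell’s share of the overall subscriber population, and the reported ±4 percent figure is consistent with the usual 95 percent confidence formula for n = 676. The two example cells below are hypothetical; the real study used 42.

```python
# Sketch of the cell weighting and margin-of-error logic described above.
# The two example cells are hypothetical; Penton used 42 cells
# (7 industry groups x 6 job titles).

from math import sqrt

# For each cell: share of the overall subscriber population, number of
# responding buyers, and the cell's proportion answering "yes" to some
# question (all figures illustrative).
cells = [
    {"pop_share": 0.03, "respondents": 12, "p_yes": 0.75},
    {"pop_share": 0.05, "respondents": 20, "p_yes": 0.60},
    # ... 40 more cells in the real study
]

# Weighted estimate: weight each cell's result by its population share,
# then normalize by the total population share covered.
total_share = sum(c["pop_share"] for c in cells)
weighted_p = sum(c["pop_share"] * c["p_yes"] for c in cells) / total_share

# Simple-random-sampling approximation of the margin of error at 95%
# confidence (the case reports +/- 4 percent for n = 676 buyers).
n = 676
margin = 1.96 * sqrt(0.5 * 0.5 / n)
print(f"weighted estimate = {weighted_p:.2%}, margin of error = +/-{margin:.1%}")
```

The margin shown ignores the design effect of disproportionate stratification and the finite population; both refinements would shift it somewhat but not dramatically for a sample of this size.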

Findings revealed that the need for fast response and the need for information on product availability and delivery are influenced by the following:

1. Time pressures created by downsizing of the work force and demands for greater productivity.

2. The fast pace of doing business.

3. Cost considerations.

Behavior varied depending on immediacy of purpose. When buyers have an immediate need for a product or service, telephone contact is the inquiry method of choice.

Of the respondents, 79.5 percent reported that they had called a toll-free number in the past year for an immediate need, while 66.1 percent had called a local number, and 64.7 percent had called a long-distance number. When the need for a product or service is not immediate, buyers are more likely to use the mail. Among respondents, 71.4 percent reported they had mailed a reader service card in the past year for a nonimmediate need, and 69.3 percent had mailed a business-reply card to an advertiser. “A new paradigm is emerging for industrial purchasing,” concludes Long. “Buyers are working in real time. They want information more quickly and they want more information.”

Question:

1. Build the management-research question hierarchy.

2. What ethical issues are relevant to this study?

3. Describe the sampling plan. Analyze its strengths and weaknesses.

4. Describe the research design. Analyze its strengths and weaknesses.

5. Critique the survey used for the study.

6. Prepare the survey for analysis. Set up the code sheet for this study. How will this study be set up to be tabulated by a statistical analysis program like SPSS?

7. Assume you are compiling your research report. How would you present the statistical information within this case to the Industry Week decision maker, the manager who must decide whether or not to continue to publish reader service cards?

8. Assume you are compiling your research report. What are the limitations of this study?

9. Assume you are the decision maker for Industry Week. Given the declining value of the reader response card to subscribers, originally designed as a value-enhancing service to IW readers and advertisers alike, what further research might be suggested by the findings of this study? Or do you have sufficient information to stop the use of reader response cards in Industry Week?

Cover Letter and Questionnaire for Mail Survey

Could we ask a favor of you?

We are conducting a nationwide survey of executives to help companies better understand and respond to your requests for information.

Your name has been selected as part of a relatively small sample, so your reply is vital to the accuracy of the study findings. All individual responses will remain completely confidential, with answers combined and presented in statistical form only.

We would be grateful if you could take a few minutes to respond to this survey. A postage-paid envelope is enclosed for your convenience.

We look forward to your reply!

Cordially,
Director of Research

P.S. To ensure a correct entry in the random drawing for the hand-held color TV, please make any necessary changes to your mailing label.

PLEASE TURN PAGE . . .

[Questionnaire pages not reproduced]

11. Can This Study Be Saved?

“What’s troubling me is that you can’t just pick a new random sample just because somebody didn’t like the results of the first survey. Please tell me more about what’s been done.” Your voice is clear and steady, trying to discover what actually happened and, hopefully, to identify some useful information without the additional expense of a new survey.

“It’s not that we didn’t like the results of the first survey,” responded R. L. Steegmans, “it’s that only 54 percent of the membership responded. We hadn’t even looked at their planned spending when the decision (to sample again) was made. Since we had (naively) planned on receiving answers from nearly all of the 400 people initially selected, we chose 200 more at random and surveyed them also. That’s the second sample.” At this point, sensing that there’s more to the story, you simply respond, “Uh huh . . .” Sure enough, more follows:

“Then E. S. Eldredge had this great idea of following up on those who didn’t respond. We sent them another whole questionnaire, together with a crisp dollar and a letter telling them how important their responses are to the planning of the industry. Worked pretty well. Then, of course, we had to follow up the second sample as well.” “Let me see if I understand,” you reply. “You have two samples: one of 400 people and one of 200. For each, you have the initial responses and follow-up responses. Is that it?” “Well, yes, but there was also the pilot study—12 people in offices downstairs and across the street. We’d kinda like to include them, average them, with the rest because we worked so hard on that at the start, and it seems a shame to throw them away. But all we really want to know is average spending to within about a hundred dollars.”

At this point, you feel that you have enough of the background information to evaluate the situation and to either recommend an estimate or an additional survey. Exhibit C-CAN 1–1 offers additional details for the survey of the overall membership of 8,391, undertaken to determine planned spending over the next quarter.
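
The stated goal, knowing average spending to within about a hundred dollars, can be checked with the standard margin-of-error formula for a mean. The sketch below assumes a standard deviation of planned spending and an illustrative count of usable responses; in practice both would come from the survey data in Exhibit C-CAN 1–1.

```python
# Sketch of the precision check implied by "average spending to within
# about a hundred dollars": the margin of error for a mean, and the sample
# size needed to hit +/- $100. The standard deviation and response count
# below are assumptions, not figures from the case.

from math import sqrt, ceil

z = 1.96                 # 95% confidence
target_margin = 100      # dollars
assumed_sd = 800         # assumed std. dev. of planned spending (hypothetical)

required_n = ceil((z * assumed_sd / target_margin) ** 2)

# Margin actually achieved with an illustrative number of usable responses
responses_in_hand = 300
achieved_margin = z * assumed_sd / sqrt(responses_in_hand)

print(f"needed n = {required_n}, achieved margin = +/-${achieved_margin:.0f}")
```

A finite population correction for the 8,391 members would shrink these figures slightly.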

EXHIBIT C-CAN 1–1 Methodology Details
[Exhibit not reproduced]

QUESTION:
1. Was drawing a second sample a good idea? Explain.
2. Were the follow-up mailings a good idea? Explain.
3. Which of the results are useful? Are these data sufficient to solve the management dilemma or is further study needed?

12. Healthy Lifestyles
The Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia, is the government agency responsible for disease-related issues in the United States. The CDC coordinates efforts to counteract outbreaks of diseases and funds a variety of medical and health research studies. The CDC also serves as a central clearinghouse for health-related data.

The CDC conducts the annual Behavioral Risk Factor Surveillance Survey. The survey measures a whole series of lifestyle characteristics that relate to health and longevity, such as smoking and use of seat belts. The survey compiles data on a state-by-state basis. Not all states are surveyed.
The data set from the 1990 Behavioral Risk Factor Surveillance Survey is on the accompanying CD in the file named HEALTHY. All numbers are percentages, and asterisks indicate the missing data for that state.
[Data excerpt not reproduced]
Your task is to prepare a summary of these data. Your report is to be issued to major news organizations, such as the Associated Press, and will appear in major newspapers around the United States. For this reason, it would be inappropriate to use technical jargon in your report.
Your boss has suggested a few general ideas about what is likely to appeal to your target audience. As you study the data, you might find other things worth including.

Questions
1. Report any interesting (i.e., unexpected, humorous, or odd) differences between states.
2. Devise a weighted index of all seven lifestyle variables. The weighted index is to serve as an overall or composite measure of healthy lifestyles. Apply your weights to the states of Minnesota, Florida, and California as an example of what your weighted index shows. (One possible form of such an index is sketched after these questions.)
3. Discuss any noteworthy limitations of the survey or data set.
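
As a starting point for Question 2, one possible form of a weighted composite index is sketched below. The variable names, weights, and state percentages are hypothetical, since the HEALTHY data file is not reproduced in this case; any defensible weighting scheme could be substituted.

```python
# One possible form for the weighted lifestyle index in Question 2.
# Variable names, weights, and state values are hypothetical -- the
# HEALTHY data file is not reproduced here.

weights = {                    # chosen so that they sum to 1
    "no_smoking": 0.20,
    "seat_belt_use": 0.20,
    "regular_exercise": 0.15,
    "healthy_weight": 0.15,
    "moderate_drinking": 0.10,
    "adequate_sleep": 0.10,
    "annual_checkup": 0.10,
}

def lifestyle_index(state_values):
    """Weighted average of the seven lifestyle percentages (0-100 scale)."""
    return sum(weights[v] * state_values[v] for v in weights)

example_state = {              # hypothetical percentages for one state
    "no_smoking": 78, "seat_belt_use": 65, "regular_exercise": 40,
    "healthy_weight": 55, "moderate_drinking": 82, "adequate_sleep": 60,
    "annual_checkup": 70,
}
print(f"composite index = {lifestyle_index(example_state):.1f}")
```
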
13. Violence on TV
As the general manager of KTDS, the NBC affiliate in Tidusville, Oklahoma, Chris has a range of responsibilities that include programming, personnel, advertising, and public relations. His least favorite activity is responding to customer complaints. Unfortunately, there’s been an unusually large number of complaints in the past few months from viewers and advertisers alike.

Most of the recent comments are objections to the level of violence on KTDS programs. Chris is sensitive to this issue because he has observed a gradual increase in violence on TV over the past 20 years. Chris really prefers the old-time movies in which dirty deeds were neatly sanitized and violent crimes occurred behind the scenes. He is sympathetic to the recent callers.

Nonetheless, he’s in a tight spot. Chris knows that small, vocal groups do not necessarily represent the population at large. People who feel strongly about an issue are likely to speak out, while those who are content tend to remain silent. While the recent callers have denounced the level of violence on KTDS shows, Chris knows he must understand and serve all of the KTDS viewers.

Chris has a suspicion about the source of the recent calls. Four months ago, a flamboyant politician announced his candidacy for mayor. This candidate has received a great deal of air time on the local news due, in part, to his impassioned outbursts. Some people love him, others despise him, but almost everyone tunes in to the evening news in hopes of catching the latest controversy. One continuing theme of his platform is violence in America in general and violence on KTDS in particular. Over the past four months, the candidate has suggested that those opposed to violence in the media “let their voices be heard.” Chris suspects that this fellow has inspired a large portion of the recent complaints to KTDS.

Chris needs to sort all of this out. To understand the views of all KTDS patrons, he has commissioned you to undertake an opinion poll. A survey has been designed, and 94 telephone survey responses have been compiled. The results of the survey reside in the file named VIOLENCE on the accompanying CD, and the actual survey is included in this case. You’ll need a copy of the survey to understand the numerical codings in the data set.

The survey design and data set compilation were undertaken by Ann Lee Bailey, an MBA student at the University of Colorado at Denver. The scenario has been altered to preserve the anonymity of the survey respondents.

The survey was done over a three-week period in October. Two hundred phone calls were made, and 106 people declined the invitation to participate in the survey. Of the 94 participating respondents, 2 were offended by the question of income and refused to answer that particular question. Nonresponses are indicated in the data set by an asterisk. A random selection of phone numbers from the Tidusville phone book was used to select the sample. Chris needs a report as soon as possible.

Violence on TV Survey
1. Gender (0) male (1) female
2. Age (1) under 20 (2) 20–30 (3) 31–40 (4) 41–50 (5) over 50
3. Marital status (0) married (1) single or divorced
4. Do you have children at home? (0) yes (1) no
5. Household income
(1) under $20,000 (2) $20,000–$40,000 (3) $40,000–$60,000 (4) over $60,000
6. Education
(1) high school (2) some college (3) college graduate (4) graduate school
7. How many hours per week do you watch TV?
(1) 0–7 (2) 8–14 (3) 15–21 (4) 22–28 (5) 29–35 (6) 36–42 (7) 43 or more
8. In your opinion, how violent are most TV programs?
(1) much too violent (2) somewhat too violent (3) violent
(4) a little violent (5) not very violent
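
The questions that follow ask whether perceptions of violence vary with gender, age, and other characteristics; cross-tabulating the coded responses is the usual first step. A minimal sketch, using the codes defined above and a few invented response rows in place of the VIOLENCE data file:

```python
# Sketch of how the coded survey responses could be cross-tabulated
# (e.g., perceived violence by gender). The rows below are hypothetical;
# the real data are in the VIOLENCE file.

from collections import Counter

# Each row: (gender, violence_rating) using the survey codes above:
# gender 0 = male, 1 = female; rating 1 = "much too violent" ... 5 = "not very violent"
responses = [(0, 2), (1, 1), (0, 4), (1, 2), (0, 3), (1, 1), (0, 5), (1, 3)]

table = Counter(responses)                     # (gender, rating) -> count
genders = {0: "male", 1: "female"}
ratings = range(1, 6)

print("rating:   " + "  ".join(str(r) for r in ratings))
for code, label in genders.items():
    row = "  ".join(str(table[(code, r)]) for r in ratings)
    print(f"{label:<8}  {row}")
```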

Questions
1. What is Tidusville’s perception of the level of violence on KTDS?
2. Does one’s perception of violence on TV vary with gender, age, marital status, income, or education?
3. Do parents with children at home have a different tolerance for violence than those without children at home?
4. Do viewers who spend a lot of time watching TV become desensitized to violence?
