There are ten essential steps to implementing an effective 360 degree feedback process. These ten steps are discussed in detail below.

360 Degree Feedback: Setting Goals

Why does your organization want to implement a 360 degree feedback process? To ensure that all of the stakeholders are in agreement on the purpose, it is important to set goals for conducting the assessment. What is the intended purpose of the intervention? Is it for purely developmental purposes or will it be used for performance appraisal (decision-making) purposes? Although feedback collected for decision making can be used for developmental purposes, processes designed for decision making are usually not appropriate for use as developmental tools, and vice versa. The type of decisions that will be made with the results will have a significant impact on how the process is carried out.

We recommend that the goal of the 360 degree feedback process be to develop the leadership capabilities of your managers. To meet this goal, you will need to implement a process that meets confidentiality and anonymity requirements, select an instrument that will provide accurate and meaningful feedback to your participants, and provide feedback to the participants. The organization will also need to provide support to the participants in carrying out their development plans. These requirements are discussed in detail in the following sections.

360 Degree Feedback: Assessing Readiness

Before going further with the process, it is important to determine whether your organization is ready to implement 360 degree feedback. To determine the readiness of the organization, answer the following questions:

  • What are the expected outcomes? What does the organization expect to get out of the process? As stated before, the most likely outcome is the development of your managers—the identification of their strengths and weaknesses for further development. If the organization is hoping to get valid performance data that can be used for decision-making purposes, those expectations are less likely to be met.
  • Who is the target population? Often 360 degree feedback is used with managers with ratings being provided by direct reports. However, it also can be used for individual contributors, with ratings being gathered from the boss, peers, and maybe customers. You must decide who the target population will be. Often a good strategy is to start with middle-level managers, then expand to other levels of the organization.
  • Do you currently have an underlying management model? To increase the chances of success, the competencies on the feedback instrument should be linked to the underlying management model of the organization. The underlying management model is simply those leadership competencies that are considered important to your organization. If the instrument does not reflect your management model, you may not be assessing competencies that are important for success in your organization.
  • Does the 360 degree feedback process fit in the organizational context? Will the culture of your organization support the rating of managers by their direct reports? If not, a great deal more effort will be required to prepare the organization for the implementation. To ensure the success of the implementation, upper management must demonstrate strong support for the process. Full and open disclosure of the process must be communicated at all levels of the organization. Formal training for participants and raters will also help to garner support for the process.
  • Is there trust and openness in the organization? For the process to be successful, there first must be a culture of trust and openness in the organization. If such a culture does not currently exist in your organization, we recommend that, before implementing 360 degree feedback, you consider some type of intervention such as training that will increase the level of trust and openness among your employees.


360 Degree Feedback: Designing the Process

The next step involves planning and designing the process. You will need to decide who will fill the necessary roles, how and when you will start the process, how a pilot test will be conducted, and how the support of upper management will be ensured. Following are six major steps in planning for 360 degree feedback implementation:

  1. Get upper management buy-in and direction.
  2. Establish a steering committee.
  3. Put a design team in place.
  4. Develop a timeline.
  5. Make a contingency diagram.
  6. Develop preventive strategies.

Following are some common concerns that upper management may have about the 360 degree feedback process. You should be prepared to answer these questions:

  • How does 360 degree feedback improve productivity?
  • Does 360 degree feedback reward “nice” people who don’t work hard?
  • Do some people refuse to change after receiving the feedback?
  • Is there really much difference between 360 degree feedback and supervisor appraisal?
  • Isn’t it too expensive and time consuming?
  • Will employees use deliberate manipulation, sabotage, and unresponsiveness when the data is used for decision-making purposes?

When designing the process, it is also important to decide how the following issues will be handled:

  • What type of behavior change do you want to encourage?
  • Are you looking to change specific management-related behaviors, such as delegation, or do you want to change more global leadership behaviors, such as acting systemically?
  • Does the process have integrity?
  • How will anonymity and confidentiality be ensured?
  • Who will “own” the data? Will only the participant and the feedback coach see an individual’s data, or will others in the organization, such as the boss and HR staff, also see it?

360 Degree Feedback: Selecting or Designing a Tool

Next, you will need to decide what type of 360 degree feedback tool will work best in your organization. In this section, we will describe different types (off-the-shelf, online, and customized instruments), their features, and considerations in their selection.

Off-the-Shelf Tools

These instruments have been created by vendors to be sold directly to individuals or organizations that wish to implement 360 degree feedback. The availability of off-the-shelf instruments is increasing at a rapid pace. If you decide to use such an instrument, your first task is to learn what instruments are available in order to choose the best possible tool for your organization. A good way to familiarize yourself with what is available is to consult Feedback to Managers (Leslie & Fleenor, 1998), a CCL publication that provides basic descriptive and technical data on 360 degree feedback instruments available for use for leadership development. You may also want to take a look at Choosing 360 (Van Velsor, Leslie, & Fleenor, 1997), which provides a step-by-step process for selecting an off-the-shelf 360-degree assessment tool.
You may also want to consider the following questions when selecting an off-the-shelf instrument:

  • Do the items and scales make sense to you (face validity)?
  • Are the competencies relevant to the change process?
  • Are competencies related to your underlying management model?
  • Will the data be meaningful to your managers?
  • Will it be consistent and accurate?
  • Are the behaviors assessed amenable to change?
  • How much experience and expertise does the vendor have with the instrument, target audience, and subject matter?

Online Assessments

If you choose an online tool, additional readiness assessment may be necessary. Online 360 degree feedback consists of a survey loaded at a Web site, which can be accessed by participants and raters so they can complete their ratings via the Web. After receiving an e-mail with instructions for completing the assessment, the participant uses a standard browser to access the Web site. The participant enters a user ID and password and then provides a list of raters who will be asked to provide feedback. The selected raters are then sent e-mails, requesting that they go to the Web site and complete the assessment instrument. The ratings are collected online and collated and assembled into feedback reports. In an online process, paper communication is replaced by e-mail. The forms are not distributed; they are collected at the Web site as raters complete them. Reminder e-mails are sent to raters who are late in submitting their ratings.

There are several advantages to using online assessments. Raters can receive training through an interactive online training module before they complete their ratings. Because raters interact directly with the system, it can respond directly when their ratings fall outside predefined limits. For example, the system can look for cases where a rater gives the same rating (all 5s, for example) to all the items on the instrument, which may have been caused by the rater’s rushing through the assessment without reading the items. The system can respond with a warning and suggest that the rater go back and find some items that may not truly be a 5.
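As an illustration only (not an actual vendor implementation), the straight-line-rating check described above could be sketched in a few lines of Python. The function name and the input format for `ratings` are hypothetical:

```python
def check_straight_lining(ratings):
    """Flag a set of responses with no variance (e.g., all 5s), which may
    indicate the rater rushed through the items without reading them.

    `ratings` is a list of numeric item responses (hypothetical format).
    Returns a warning message, or None when the ratings vary.
    """
    if len(ratings) > 1 and len(set(ratings)) == 1:
        return (
            f"All {len(ratings)} items were rated {ratings[0]}. "
            "Please review your responses and adjust any items "
            f"that may not truly be a {ratings[0]}."
        )
    return None

print(check_straight_lining([5, 5, 5, 5]))  # triggers a warning message
print(check_straight_lining([5, 4, 5, 3]))  # None: the ratings vary
```

A real system would run a check like this as the rater submits the form and display the warning before accepting the ratings.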

Another approach possible with online systems is to provide raters with feedback reports indicating how well they agree with others’ ratings of the same participant. The feedback gives raters a frame of reference, indicating whether their ratings tend to be more severe or more lenient than the other raters’ assessments.
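The severity/leniency comparison can be illustrated with a minimal sketch: compare one rater's average rating of a participant against the average given by all other raters of the same participant. The function name and data format are hypothetical, not taken from any actual system:

```python
from statistics import mean

def rater_tendency(all_ratings, rater_id):
    """Compare one rater's mean rating of a participant with the mean
    given by the participant's other raters.

    `all_ratings` maps rater id -> list of item ratings (hypothetical format).
    A positive result suggests the rater is more lenient than the others;
    a negative result suggests the rater is more severe.
    """
    own = mean(all_ratings[rater_id])
    others = [r for rid, items in all_ratings.items()
              if rid != rater_id for r in items]
    return round(own - mean(others), 2)

ratings = {"A": [5, 5, 4], "B": [3, 3, 2], "C": [4, 3, 3]}
print(rater_tendency(ratings, "A"))  # 1.67: more lenient than the others
print(rater_tendency(ratings, "B"))  # negative: more severe than the others
```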

Before participants receive their feedback reports, they can complete an online training module that prepares them to accept the feedback. In addition, participants can communicate with a virtual feedback coach online to help them prepare a development plan. The online system can link the participants to an interactive development planning system that guides them through the steps of identifying development needs and designing a plan to address them.
Online assessment and feedback are not meant to substitute for face-to-face discussions with a feedback coach. The intent of using online assessment is not to remove all human interaction from the process. The intent is to reduce the resources that the organization has to expend on the administrative aspects of 360 degree feedback so that these resources may be redirected toward activities that truly add value.

Customized Instruments

Another alternative to using an off-the-shelf instrument is to design a survey specifically for your organization. Such a survey is referred to as a customized instrument. An advantage to using a customized instrument is that the questions and scales are designed to measure competencies that are important to your organization. Although this may be the best approach, customizing a 360 degree feedback instrument can be a difficult and daunting task. First, the instrument must be designed so that it is a reliable and valid measure of the competencies of interest. Reliability refers to the consistency or stability of the data collected by the instrument. Validity refers to the extent to which the instrument measures what it is supposed to measure and to the appropriateness of the ways scores from an instrument are used.

The customized instrument, therefore, must be designed to be a stable (reliable) measure of the competencies over time. It also must be a good (valid) measure of the competencies that it is designed to assess. The bottom line is that you don’t want to give the participants feedback that is based on unstable and inaccurate data. Designing a reliable and valid survey requires the involvement of someone who is an expert in instrument development. If you don’t have this expertise in your organization, you should consider obtaining the services of a consultant who has the necessary training and experience. A doctoral degree in psychology, educational measurement, or a related field is usually required.

A variation is to use a customizable instrument that is offered by a vendor. Usually, these instruments allow you to select competencies from a list of scales that have been tested for reliability and validity by the vendor.

360 Degree Feedback: Choosing and Preparing Participants

The organization should choose participants according to the business need for the assessment. People at any level of the organization can benefit from 360 degree feedback, and individuals need different feedback at different times during their careers. For example, new employees need feedback on which skills are important to their jobs, while midlevel employees require feedback on their strengths and developmental areas. As employees move up the organizational ladder, they need more feedback on their ability to create a vision for the organization’s success.

Some type of orientation to the 360 degree feedback process should be provided to the participants.

360 Degree Feedback: Preparing the Organization

The next step is to prepare the organization for the process. Frequently, organizations are ill prepared for what lies ahead, especially if they have not experienced 360 degree feedback before or if they have experienced a less-than-successful implementation. In this step, you will conduct a briefing for the executive team and the raters. A rater is a respondent who is asked by the participant to provide feedback on the 360 degree feedback instrument. This person should be someone who has worked with the participant during the past year and who can provide appropriate feedback.

Accuracy of the assessment process is a key determinant of the successful implementation. The greatest threats to accuracy are bias and error on the part of raters. You can minimize bias and error by training the raters to be accurate and honest so that feedback recipients can have enough confidence in the data to use it in personal and professional self-development programs. This can best be accomplished through a formal rater-training program.

One aspect of this training consists of exposing raters to the scales and items before they are asked to provide responses. Typically referred to as competency scale training, this type of training can increase rater accuracy. Another type of training, frame-of-reference training, involves having each scale calibrated by using a list of behaviors that correspond to performance levels in the organization (that is, what constitutes effective and ineffective behaviors in the organization). Raters can also be trained in behavioral observation. One way to accomplish this is to ask each rater to keep a journal that documents behaviors. At CCL, we teach and practice the situation-behavior-impact (SBI) method. This simple feedback method keeps raters’ comments relevant and focused to increase their effectiveness. With SBI, the rater describes the situation in which he or she observed the participant, the behavior observed, and the impact of that behavior on the rater and others present in the situation.

In addition, rater training can help prevent other types of errors. Specifically, training can help reduce recency errors (ratings based on most recent experiences) and halo errors (generalizing ratings across items—for example, rating an individual high on all items because she is outstanding on one particular item).

Two components that help to ensure quality ratings are using raters who have the ability to accurately assess and evaluate the behaviors measured and preparing these individuals to provide accurate ratings. The raters are likely to have perceptions of the 360 degree feedback process that may affect the results:

  • Perceptions of the instrument
  • Thoughts about how and why they were chosen as raters
  • Perceptions about the extent to which they have anonymity and accountability
  • Expectations about the outcomes of the process
  • Perceptions of the return on the time they are asked to invest

These perceptions may have an impact on the quality of the ratings provided by the raters. Ways of managing these perceptions are discussed in detail below.

Familiarize raters with competencies. A competency is a characteristic of an individual that is related to effective performance in a job or leadership role. In the case of leadership development, it’s typically what an individual knows and does, including knowledge, skills, abilities, and personality characteristics. The scales on a 360 degree feedback survey are made up of related items that are grouped together on a feedback report to form a competency.

Give the raters the opportunity to review each competency before they respond to the actual survey. Include a description of each competency, along with the scale name and individual items (questions). Helping raters to understand the scales and the specific behaviors of those scales can help them to provide better feedback to the participants.

Give raters examples of behavior within the organization. A rating scale is a structured way for responding to items on an instrument. Most rating scales provide the rater with a numeric scale (usually a five-point scale) on which each point of the scale is anchored by a descriptive term, such as agree or strongly agree.

Give raters examples of behavior within the organization to ensure that all raters will respond to the rating scale using the same metrics. That is, everyone will agree on what behaviors constitute a 5 on the rating scale and what behaviors constitute a 1.

First, develop a set of examples that correspond to each response scale rating for each behavior being assessed. The raters then receive a list of the behaviors that correspond with performance levels and the response scale. Consider communicating this kind of information through either a workshop or printed materials prior to administering the survey.

Improve the raters’ behavioral observation skills

For at least one week before administering the survey, ask raters to keep a journal or record of their observations of behaviors assessed in the survey. Ask them to write down descriptive, not evaluative, comments that support their ratings. Although this may sound threatening, consider the benefits. Keeping a record of observations can reduce the tendency of raters to generalize a positive or negative incident to other performance dimensions. Keeping a record or journal may also help raters to identify behavior patterns instead of rating based only on incidents that just occurred.

Explain how the raters were selected

We recommend allowing the participants to choose their own raters. Although participants may attempt to choose only those raters who will give them positive ratings, participants are often surprised at the honesty of their raters in identifying their developmental needs. Providing ratings should be voluntary—raters should not be forced to do so.
Because of their working relationships, supervisors and direct reports typically understand why they were chosen as raters. It is not as easy for peers, customers, or family members to understand why they were chosen. Some raters may think that they were chosen because they will provide high ratings. When it comes to choosing peers, participants should be instructed to consider selecting those people with whom they have an interdependent working relationship.

Clarify rater anonymity and accountability

Anonymity means that no one will be able to tell which feedback a particular rater provided. It is a safeguard to ensure that the information gathered is candid and accurate. Anonymity also protects raters against retaliation for providing negative feedback. Anonymity usually is ensured by averaging the individual ratings within a particular rater group (direct reports, peers, and so on) and by not reporting the ratings for a group unless the actual number of raters exceeds a minimum number (usually three).
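The aggregation rule described above—average within each rater group, and suppress any group that falls below the minimum number of raters—can be sketched as follows. This is an illustration under stated assumptions (a threshold of three raters and a simple per-rater score list), not a description of any particular vendor's implementation:

```python
from statistics import mean

MIN_RATERS = 3  # assumed minimum group size before results are reported

def summarize_group(ratings_by_rater):
    """Average the ratings within one rater group (e.g., direct reports).

    `ratings_by_rater` is a list of per-rater scores (hypothetical format).
    Returns the group mean, or None when the group is too small to report
    without compromising rater anonymity.
    """
    if len(ratings_by_rater) < MIN_RATERS:
        return None  # suppress: too few raters to preserve anonymity
    return round(mean(ratings_by_rater), 2)

print(summarize_group([4.2, 3.8, 4.5]))  # 4.17
print(summarize_group([4.0, 2.0]))       # None: group suppressed
```

Suppressing small groups matters because reporting, say, a lone peer's average would reveal exactly what that peer said.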

Research has shown that anonymous raters are more likely to provide accurate, objective feedback than are their counterparts who are required to identify themselves. Many raters are concerned about retaliation or punitive consequences for their ratings, especially when the person they are rating is their boss. Explain whose scores will be anonymous and whose will not. Because most participants have only one boss, it is usually not possible to make the boss’s ratings anonymous. Make raters aware that they are accountable for providing accurate, honest, and meaningful responses. One good way to encourage this is to ask raters whether they would be willing to provide specific behavioral examples for any areas of the feedback that surprise the participant. Some raters may be willing to provide a copy of their journal. Another way is to explain the outcomes of this process.

Explain the outcomes of the process

Raters are asked to invest time in the development of others, and it is important to consider what raters might expect in return. These outcomes may include the following:

  • Increased awareness of performance and work-related behaviors
  • Increased awareness of raters’ expectations
  • Greater alignment of performance expectations between employees and the organization
  • Improved communication about work expectations
  • Improved work behavior

One goal of a 360 degree feedback process is to provide feedback on strengths and areas for development to the participants. Remind raters that they need to provide information in both areas. After reviewing the feedback, participants should send their raters follow-up notes thanking them for taking the time to help with their developmental process. Afterward, the participants may want to share with their raters the insights, goals, and challenges that resulted from receiving their feedback.

Change rarely occurs overnight, nor does it occur in a vacuum. Raters need to understand that they can help by providing ongoing support to the participant.

360 Degree Feedback: Administering the Assessment and Processing the Results

The assessment process should be administered in a manner that is consistent and fair for all participants and raters. The data collection and processing should be both efficient and cost-effective. For small interventions with a limited number of participants, the data can be keyed into various software programs designed for this purpose. For larger interventions, using optically scanned answer sheets will reduce errors and speed up the process. Applications also exist that allow the data to be collected via the Web or e-mail.

The feedback report should contain clear information as to how the recipient is perceived by his or her raters. If appropriate, the report may contain statistical comparisons to others in the organization and industry norms from outside organizations. The final feedback package should be delivered in a sealed envelope. It should contain a summary of the feedback, including the questions responded to by the raters, directions for interpreting the results, and action-planning worksheets.

360 Degree Feedback: Delivering Feedback

To successfully deliver the feedback, experienced coaches should help the recipients reach their own conclusions about the data. The feedback coaches should make sure the recipients understand the process. They should convince the recipients to go past the data to the meaning and convince them that conflicting views can be valid.

Effective delivery can be provided either one-on-one or in a group session. The coach should be familiar with the report format and prepared to answer questions from the recipient. The coach should make a positive presentation that will convince the recipient to start action planning immediately.

Effective feedback is

  • Individualized
  • Clear and unambiguous
  • Accurately worded
  • Well presented
  • Focused on modifiable behavior
  • Goal directed
  • Timely
  • Affirming and reinforcing
  • Sensitive
  • Unenforced
  • Descriptive
  • Specific
  • Validated
  • Charted

Ineffective feedback is

  • Evaluative and judgmental
  • Insensitive to the recipient’s ability to use the feedback productively
  • Poorly timed
  • Labeling
  • Discounting (writing the person off as a bad debt)
  • Delivered indirectly
  • Innuendo
  • Faint praise
  • Focused on the recipient’s intentions

360 Degree Feedback: Supporting Development

Providing 360 degree feedback to the participants is only the beginning of the development process (McCauley & Moxley, 1996). Feedback should not be seen as the sole developmental event—it should be seen as unfreezing the participants’ perceptions of the developmental process. The most important follow-up to feedback is the development plan, in which the participants identify important goals for improvement and develop strategies for meeting these goals. The goals should be important, specific, relevant, and focused on behaviors that can be changed or skills that can be learned. Furthermore, it is better to limit the number of goals instead of trying to work on too many goals at once.

The next step is to outline the learning strategies that will help the participants meet their developmental goals. The participants should employ multiple strategies to maximize their learning. They also should integrate their development plans with their work rather than viewing them as separate from one another. This is often a new way of thinking for participants. They may think of development as something that is done away from the job and that detracts from their work. Leaders, however, may learn the most through the challenges they face at work. After reflecting on their learning experiences, leaders should be able to see that they need to consciously shape their work experiences into opportunities to learn.

Learning strategies fall into the following four broad categories.

  1. Experience: Participants need time to practice their new skills and behaviors. If there are few opportunities to practice in the current job, the participant should identify other ways of adding such challenges, such as being assigned to a cross-functional task force. The participant should also identify what new jobs or assignments could help him or her further develop; however, these experiences need not be limited to the workplace. There are also volunteer experiences in the community that can provide the necessary challenges.
  2. Ongoing feedback: As the participants practice their new skills and behaviors, they need ongoing feedback about how they are doing. Each participant should identify at least one person who can provide candid, helpful, and specific feedback. The participant should talk with this person at regular intervals to get specific feedback.
  3. Role models and coaches: Beyond obtaining ongoing feedback, participants can ask other people to play important roles in their learning. One strategy for developing a skill is to observe a role model performing that skill. The aid of a coach, one who knows the development goals and can give advice and support, should also be enlisted.
  4. Training and reading: Books and training programs can also provide expertise and strategies for developing in targeted areas. Skill-building training programs often provide a “safe” place to practice new behaviors. They are safe because the consequences are minimal, and participants may be willing to take more risks in trying new behaviors and skills.

Even the best development plans may fail if the participants do not receive support in their implementation or if their organizations do not support development in general. The most direct level of support is usually the boss. Participants should engage their bosses in development planning and implementation and get their bosses’ input. Participants should share what they have learned through feedback, clarify their development plans and goals, and talk about how their bosses can help with the development process. Bosses should follow up with their direct reports who have been through a 360 degree feedback process and let the direct reports know that they want to play a role in the development process.

How can you tell if an organization supports development? In such organizations, there are expectations for continuous learning and growth. In organizations that support development, we see the following:

  • Senior leaders who participate in the 360 degree feedback process and actively pursue their own development goals
  • Succession-planning processes that place leaders in jobs that will stretch and develop them rather than placing them in jobs that they can clearly do
  • Bosses who are held accountable for the development of their direct reports
  • Employees who have developmental tasks for their jobs each year
  • Leaders who have easy access to others for coaching and mentoring
  • Mistakes being used as potential learning experiences rather than being detrimental to a leader’s career

360 Degree Feedback: Evaluating the Process

As with any process, it is important to assess the impact of 360 degree feedback. We recommend using a four-step evaluation process.

  1. Evaluation planning: Develop evaluation objectives, such as providing feedback or evidence regarding the impact of 360 degree feedback initiatives, informing curriculum development efforts and/or initiative improvement, and investigating the connection between leadership and organizational impact. Then match the evaluation objectives with solutions.
  2. Data collection: Examples include questionnaires, interviews, focus groups, observation, and performance data. Action planning, customer input, self/peer/boss assessments, and coaching data may also be used.
  3. Data analysis: To what extent do the 360 degree feedback ratings correlate to performance outcomes valued by the organization? How can the organization improve the 360 component of the leader development initiative to increase impact?
  4. Reporting: The impact evaluation report will include an executive summary, a description of the evaluation deliverables and evaluation questions, an overview of how data were collected and analyzed, a data summary organized by evaluation questions and important themes, recommendations, and appendices (a copy of all tools used to collect data).