360 Degree Assessment

by Julia Miller

The 360 degree process

There are two main aspects of the 360 degree process that you may become involved with. The first is the design of the instrument to be used. The second is the process itself, if you or members of your team are participating in it. This participation could be as a rater, as someone being rated, or as someone involved in giving feedback on an individual’s 360 report.

Design and selection of the 360 instrument

The 360 degree feedback process is organised and administered through a software package, usually called a ‘360 degree instrument’. A large number of such instruments are now on the market. Many are available online and others come as written forms, but all use software to derive the report from the ratings.

  • Some are ready-made, measuring participants against others who have completed the questionnaire in the past to give normative data.
  • Other companies provide shell instruments that you can adapt to a certain extent to fit your own organisation’s needs.
  • You can also – and this is the route that is most usually taken – choose tailor-made services, designed in collaboration with an external consultant and either based on your own competency frameworks or starting from a blank sheet of paper.

This software package has two parts:

  • A list of questions about the individual being assessed. This list will be given to each individual rater. The questions ask the rater to think about how often, or how effectively, the individual carries out certain behaviours that are considered critical to their job (see Questionnaire and report examples).
  • The structure of a report for each individual, which will be completed and produced from the responses given by each rater. This report will show you how the responses for each behaviour compare across the span of raters: for example, how your manager rates specific behaviours, compared with how your peers rate you and with how you might rate yourself.
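To illustrate how an instrument might assemble such a report behind the scenes, the sketch below (hypothetical data and group names, not any specific commercial package) averages the ratings for one behaviour by rater group, so that self, manager, peer and direct-report views can be compared side by side:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical ratings for a single behaviour, on a 1-5 frequency scale.
# Each entry is (rater_group, rating).
ratings = [
    ("self", 4),
    ("manager", 3),
    ("peer", 2), ("peer", 3), ("peer", 3),
    ("direct report", 4), ("direct report", 5),
]

def summarise_by_group(ratings):
    """Average the ratings for each rater group, as a 360 report might."""
    by_group = defaultdict(list)
    for group, rating in ratings:
        by_group[group].append(rating)
    return {group: round(mean(scores), 2) for group, scores in by_group.items()}

print(summarise_by_group(ratings))
# For this sample data: self 4, manager 3, peer 2.67, direct report 4.5
```

A real instrument would repeat this aggregation for every behaviour on the questionnaire and lay the group averages out graphically, but the comparison logic is essentially this simple.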

What are your objectives?

Whether you decide to involve an external consultant or not, below are some of the questions you need to think about.

  • What is the overall objective of the 360? Is the aim to achieve cultural change throughout the organisation, for instance, or to look specifically at team skills?
  • Are there specific areas of concern, such as particular key skills I should focus on?
  • Do I want to encourage the development of self-awareness, or am I aiming to identify the training and development needs of a particular group?

Who will be involved?

Having thought through your objectives, the next stage is to consider who will be taking part:

  • Who is this actually for?
  • Who will be the participants? Is it for all of my management staff or only at board level?
  • Am I measuring leadership competencies only, or management-level competencies, or am I looking more at developmental issues? (If you are measuring leadership or management, you might want to place more emphasis on upward and downward, 180 degree, feedback.)
  • Am I looking at team-working across the organisation or only at certain levels?
  • Am I considering supervisory staff only?
  • Who will be the raters and who will ensure that they are fully briefed on the process? (Here, you need to consider whether the raters will be able to be sufficiently objective.)

How do I choose the questions?

There are various starting points from which you may choose to develop your list of questions:

  • Standard or national competencies
  • Mission statements
  • Organisational values
  • Management standards
  • Your own organisation’s competencies.

You then need to think about question design. If you are basing your 360 on competency frameworks, this should be fairly straightforward. If not, you need to consider various techniques:

  • Observing and noting how the person does the job and developing questions from those observations
  • Asking people who do the job to describe it in order to choose relevant questions.

Two more methods of developing appropriate questions – critical incident analysis and repertory grid techniques – might be used if you are employing professional consultants to design the questions for the instrument.
Make sure that the questions asked are clear, unambiguous and based on concrete, observable behaviours.

What type of rating scale should I use?

There are various types of rating scale that you can use, but most fall into one of four categories, each of which has a different purpose.

  • Effectiveness scales consider how effectively the individual demonstrates specific competencies and behaviours.
  • Potential scales are commonly used for succession planning and ask raters how well an individual might perform in the future.
  • Ranking scales ask raters to compare the individual against a highly effective leader they have encountered.
  • Frequency scales are very common, as they are non-judgemental and are a less subjective form of measure. They ask how often the individual has demonstrated specific behaviours.
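A frequency scale of the kind described above can be sketched as a simple mapping from numeric scores to the wording raters see on the questionnaire (the five labels here are illustrative, not a standard):

```python
# Illustrative five-point frequency scale for a 360 questionnaire.
FREQUENCY_SCALE = {
    1: "Never",
    2: "Rarely",
    3: "Sometimes",
    4: "Often",
    5: "Always",
}

def label_for(score):
    """Return the wording a rater would see for a numeric score."""
    return FREQUENCY_SCALE[score]

print(label_for(4))  # Often
```

Because every rater answers against the same fixed anchors, the numeric scores can be averaged and compared across rater groups without asking anyone to make a direct judgement of effectiveness.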

What should the report look like?

Decide how you want your final report to look and whether the process is to take place online or be mainly pen-and-paper based (see Examples).

  • Will the report contain lines or graphs?
  • Will results be measured against your own organisation’s norms or against actual scores?
  • Do you want to show all rater responses or averages only?
  • Do you want to see the rater responses by group and if so which groups (for example, team, customers or other departments)?
How do I pilot and evaluate the 360?

  • Always pilot your 360 and make sure your pilot sample represents your final users.
  • Ask participants whether the process worked for them.
  • Re-run the pilot, if necessary, to get it right before rolling the 360 out across the organisation.
  • Confidentiality is vital: encrypt the data where possible and make sure your administrator is as neutral as possible.
  • Evaluate the success of your 360, for example by comparing changes over time on the key competencies being measured.