Which assessment template?

[Update: I’ve summarised the options in a table near the bottom of this post]

A small enhancement released today lists assessment templates in a sensible order when you’re creating or editing a survey:

[Screenshot: assessment template list when creating or editing a survey]

They’re organised by family, edition, and language.

Family

Currently, there are two families:

  1. Agendashift values-based delivery assessment – this is the main one, the template we’ve been iterating on since 2014
  2. Agendashift values-based adaptability assessment – this is new, developed in parallel with chapter 5 of the new book

Unless you specifically want to assess your organisation’s ability to make change happen, you almost certainly want the first one.

Edition

This specifies both the structure and the size of the assessment:

  1. Original edition – the full-sized template (43 prompts at the latest count), structured by value (transparency, balance, collaboration, etc)
  2. Mini edition – like the original edition, but only 18 prompts (3 per value)
  3. Pathway edition – the same prompts as the original edition (with minor variations), structured not by value but by the steps of Reverse STATIK
  4. Mini pathway edition – an 18-prompt version of the above
  5. Featureban edition – the mini edition, re-purposed (see the Featureban home page)

Partners (and their clients) have the full range of templates available to them. The free trial gives access to the mini and mini pathway editions only.

Surveys are generally conducted using the original or mini editions. The pathway or mini pathway editions come into play later when creating a transformation map (chapter 3).

Language

As shown in the screenshot, all of these combinations are available in English (EN), and most of them in French (FR) and German (DE) also. The original and mini editions are also available in Spanish (ES), Hebrew (HE), Italian (IT), Dutch (NL), and Russian (RU).

Summary

Here are all the options summarised in a table:

 

| Edition | Agendashift values-based delivery assessment | Agendashift values-based adaptability assessment | Languages |
| --- | --- | --- | --- |
| Original | Partners | n/a | EN, DE, ES, FR, HE, IT, NL, RU |
| Pathway | Partners | n/a | EN, DE, FR |
| Mini | Trial, Partners | Trial, Partners | EN, DE, ES, FR, HE, IT, NL, RU |
| Mini pathway | Trial, Partners | Trial, Partners | EN, DE, FR |
| Featureban | Trial, Partners | n/a | EN |


New feature: tagging assessments

Now that part I of the book is out, I’ve been using the #next-steps channel in the Agendashift Slack both to share plans for part II and to discuss enhancements to the online tools. I’ll blog about part II soon; this post is about a new feature that addresses a frequently expressed need.

The basic need is the ability to analyse survey results in finer detail, reporting on different sub-populations – managers and staff, different teams, different roles, different projects, and so on.

There is already a crude way to achieve this: conduct multiple surveys and aggregate the results afterwards. However, it has two drawbacks:

  1. The UI is very crude (it involves URL hacking)
  2. It works only if the populations are surveyed separately. That’s not always possible; it requires forethought and adds significant administrative overhead

Point 1 is of course fixable, but point 2 may not be. A different kind of solution is required.

Now, survey administrators and participants may ‘tag’ their assessments, with as many tags as they like:

[Screenshot: adding tags to an assessment]

And in the charts view, assessments may be included or excluded by tag:

[Screenshot: including and excluding assessments by tag in the charts view]

You could, for example, include all assessments tagged for a given department, but exclude assessments tagged for certain roles.
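For the curious, here’s a minimal sketch of that include/exclude filtering in Python. The data shapes and matching rules are assumptions for illustration – the tool’s actual model isn’t published:

```python
# Hypothetical assessments, each carrying a set of tags
assessments = [
    {"id": 1, "tags": {"sales", "manager"}},
    {"id": 2, "tags": {"sales", "staff"}},
    {"id": 3, "tags": {"engineering", "staff"}},
]

def filter_by_tags(assessments, include=(), exclude=()):
    """Keep assessments carrying every 'include' tag and none of the
    'exclude' tags - one plausible reading of the charts-view controls."""
    inc, exc = set(include), set(exclude)
    return [a for a in assessments if inc <= a["tags"] and not exc & a["tags"]]

# Include a department, exclude a role:
print(filter_by_tags(assessments, include={"sales"}, exclude={"manager"}))
# -> [{'id': 2, 'tags': {'sales', 'staff'}}]
```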

Finally, if you’re curious about the context name ‘Free trial’, it’s one way to experiment with the mini version of the Agendashift values-based delivery assessment for free. You can sign up here.



New feature: cross-referencing prompts across templates

Disturbed night, so a quick new feature coded before breakfast:

[Screenshot: survey results displayed via a ‘pathway’ template, showing original prompt index numbers]

This excerpt shows some of the results from the collaboration and customer focus categories of a survey based on the Agendashift values-based delivery assessment, but displayed using a so-called ‘pathway’ template.

We have long found that people respond very well to the values-based organisation of the assessment, with category headings of Transparency, Balance, Collaboration, Customer focus, Flow, and Leadership. Experience forces us to admit, however, that a plan based on these values isn’t very compelling. Where do you start? Conversely, whilst a narrative arc makes for a much more compelling plan, it would feel strange for an assessment. Hence the very useful trick (a few months old now) of being able to switch templates when the time comes to work with the results.

This week’s little enhancement is to display the original index numbers of the prompts – the 3.3, 4.1, and so on in the example – in the reorganised results. This means that in workshops, we can generate outputs against either (or both) sets of results and still maintain traceability. This in turn makes it easier to reuse the products of the Exploration session (which includes the survey debrief) in the Mapping session. Slick!
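As a rough illustration in Python – the prompt texts and step names below are invented, since only a few real index numbers are visible in the screenshot – the cross-referencing amounts to carrying each prompt’s original index through the regrouping:

```python
# Invented prompt IDs and texts, purely for illustration
prompts = {
    "3.3": "a Collaboration prompt",
    "4.1": "a Customer focus prompt",
}

# The pathway template regroups the same prompts under different headings;
# each entry keeps its original index, so outputs generated against either
# view stay traceable to the same underlying prompt.
pathway_view = [
    ("Pathway step A (hypothetical)", "4.1"),
    ("Pathway step B (hypothetical)", "3.3"),
]

for step, index in pathway_view:
    print(f"{step}: [{index}] {prompts[index]}")
```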

Want to try this for yourself? Some options:



Explaining Agendashift’s “sliders”

I often get asked how this visualisation works, and in particular what the blue and darker grey bars do. “They’re just a measure of spread” is the quick answer. A longer answer is that their calculations can be inferred from other clues in the UI – not 100% helpful!

[Screenshot: assessment results with scores and spread bars]

Each of the scores – 2.7, 2.9, and 3.4 in this example – is an interquartile mean, sometimes called the IQM or midmean. To calculate one of these, we take all the relevant scores, sort them, discard the top and bottom quartiles (and along with them any outliers), and calculate the mean of the remaining data points. It is described as a robust statistic, one that is not easily influenced by outliers or errors.
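In Python, a minimal version might look like this (using a simple whole-point cut for the quartiles; the tool’s exact quartile handling may differ):

```python
def interquartile_mean(scores):
    """Sort, discard the bottom and top quartiles, average the rest."""
    data = sorted(scores)
    q = len(data) // 4  # size of each discarded quartile (whole points only)
    middle = data[q:len(data) - q] if len(data) >= 4 else data
    return sum(middle) / len(middle)

# Eight answers on the 1-4 scale; the middle four are 2, 3, 3, 3
print(interquartile_mean([2, 3, 3, 2, 4, 3, 2, 3]))  # 2.75
```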

Looking at prompt 4.1 in the picture, we can say informally that the “average of the middle half” of the scores given to this prompt is 2.7 (on a scale of 1 to 4). We might guess that the majority answers lie between 2 and 3, with more 3’s than 2’s. Not “nailing it” yet, but “getting there”.

The calculations for the bars are very similar, but here we do want to be influenced by the extremes. The left and right ends of the darker grey bars show the mean of the bottom and top quartiles respectively, the most extreme answers at the low and high ends of the scale. Notice that for prompts 4.2 and 4.3, these bars extend all the way to the right. At a glance, we know that at least a quarter of the scores here were 4’s (since their mean is 4, and there can be no scores higher than 4).

The blue bars are also influenced by extremes, but moderated by more typical scores. The left and right ends here show the mean of the bottom and top halves respectively. Looking at prompt 4.3, we can infer that nearly half the scores here were 4’s. Awesome!
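Putting the two bars together, here’s a sketch of the endpoint calculations in the same spirit (again with simple whole-point splits; the real implementation may differ in detail):

```python
def bar_endpoints(scores):
    """Endpoints of the two spread bars:
    darker grey = mean of bottom quartile .. mean of top quartile
    blue        = mean of bottom half     .. mean of top half"""
    def mean(xs):
        return sum(xs) / len(xs)
    data = sorted(scores)
    n = len(data)
    q, h = max(n // 4, 1), max(n // 2, 1)
    grey = (mean(data[:q]), mean(data[-q:]))
    blue = (mean(data[:h]), mean(data[-h:]))
    return grey, blue

# If at least a quarter of the answers are 4s, the grey bar reaches 4.0:
grey, blue = bar_endpoints([2, 3, 3, 4, 4, 3, 2, 4])
print(grey)  # (2.0, 4.0)
print(blue)  # (2.5, 3.75)
```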

