The Agendashift survey debrief: Alternate cuts

One very encouraging sign of health in the Agendashift community is the way things are beginning to happen without my personal involvement. It really is taking on a life of its own! For example, our Slack has spawned a #bookclub channel for coordinating a weekly get-together, and partners have been meeting to share and experiment with different ways to debrief their Agendashift surveys.

In the latter case, some of that learning has been incorporated into our survey reporting tool, the ‘unbenchmarking’ report. Here are three demos, each using an illustrative subset of data from the 2017-18 global survey (which is one way to try a mini assessment for yourself) and a new feature that makes it easier for facilitators to choose which report sections to share, and in what order.

Demo 1: The classic report

First off, here’s what could be described as the ‘classic’ unbenchmarking report, with sections pretty much in the order described in chapter 2 of the book, plus two sections (Above profile and Below profile) that rely on some machine learning functionality that was beyond the scope of the book:

Tip: Page through the several sections of the report using the PgUp and PgDn buttons (see the navigation control top right) or the corresponding buttons on your keyboard or presentation clicker.

Demo 2: The minimum viable debrief

If you’ve read the book, you’ll know that I encourage facilitators to move quickly over the early sections of the report, leaving plenty of time for the two sections that identify (respectively) areas of agreement and disagreement. Partner Olivier My takes that advice and turns the dial up to 11 with his ‘minimum viable’ debrief:

Here we’ve dispensed with even the cover page, going straight to the categories and prompts with the narrowest spread of scores, i.e. areas of agreement:

[Screenshot: categories and prompts with the narrowest spread of scores]

Interestingly, we seem to be in agreement mainly on weaknesses! There’s a good chance that there are some quick wins here.

Now that we’re comfortable with the data, let’s go to the second page of Olivier’s report:

[Screenshot: categories and prompts with the widest spread of scores]

Very different! The issue here isn’t obvious weakness but inconsistency, whether of actual behaviour and outcomes or of perception. Some typical questions for the facilitator to ask here:

  • “Who scored a 3 or 4 for this one and wouldn’t mind sharing their thinking?”
  • “Who can imagine why someone might score a 3 or a 4 here?” (a safer version of the previous question)
  • “Who has good examples of this working well?”
  • “Who’s different to that? Who had a 1 or a 2?”
  • “What might explain the 1s and 2s?” (again, the safer version of the previous question)
  • “What’s the impact when it’s not working well?”
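By the way, the ‘spread’ idea behind these two report pages is easy to picture. Here’s a minimal sketch in Python, with made-up prompt names and scores and no claim to match the unbenchmarking report’s actual calculation: it simply ranks prompts by the standard deviation of their 1-4 scores, narrow spread suggesting agreement, wide spread suggesting the kind of inconsistency discussed above.

# Illustrative only: one way to rank survey prompts by the spread of their 1-4 scores.
# Prompt names and scores below are made up; the unbenchmarking report may calculate this differently.
from statistics import mean, pstdev

scores = {
    "Prompt A": [2, 2, 2, 2, 3],  # narrow spread, low-ish scores: agreement on a weakness
    "Prompt B": [1, 2, 4, 4, 3],  # wide spread: inconsistency of behaviour or of perception
    "Prompt C": [3, 3, 4, 3, 3],  # narrow spread, strong scores
    "Prompt D": [1, 1, 2, 1, 2],  # narrow spread, weak scores
}

# Sort prompts from narrowest to widest spread
ranked = sorted(scores.items(), key=lambda kv: pstdev(kv[1]))

print("Narrowest spread (areas of agreement):")
for prompt, s in ranked[:2]:
    print(f"  {prompt}: mean {mean(s):.1f}, spread {pstdev(s):.2f}")

print("Widest spread (areas of disagreement):")
for prompt, s in ranked[-2:]:
    print(f"  {prompt}: mean {mean(s):.1f}, spread {pstdev(s):.2f}")

Of course, in a real debrief the interesting part isn’t the arithmetic but the conversation it opens up, as the questions above suggest.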

Demo 3: The compact debrief

Steven Mackenzie uses a debrief structure very different to mine, alternating narrowest and widest spread with strongest and weakest scores:

  1. Score distributions
  2. Categories
  3. Categories and prompts with the narrowest spread of scores
  4. Strongest categories and prompts
  5. Categories and prompts with the widest spread of scores
  6. Weakest categories and prompts

(Full report beginning at the Contents page here)

It took me a while to get this, but I’m now keen to try it for myself at the next available opportunity. Here’s how Steven explains it:

  1. Categories as bar charts – an easy introduction to the spread of data
  2. Categories as sliders – I explain the slider visualisation of the same data, helping people understand the slides to come
  3. Areas of strongest agreement – I expect the group to recognise these behaviours and be mentally comfortable accepting what follows
  4. Highest scores – I will congratulate them on some specifics here and offer them the option to talk if they are passionate, but warn that we’re seeing variation in responses creeping in
  5. Widest variation – I will ask for discussion here
  6. Weakest scores – I will ask for discussion here

So there you go: community learning captured in the technology!

Experience it first hand

Of my upcoming public workshops, that “next available opportunity” is the 1-day workshop on April 6th in Raleigh:

The survey debrief is the catalyst for a generative process in which we identify the outcomes that represent action areas, themes, goals and ambitions for the organisation’s transformation. This is session 2 of a 4- or 5-session public workshop, and easily a workshop in its own right if you’re doing it privately with a client or your employer.

As already mentioned, you can try the mini assessment by participating in the 2017-18 global survey; there’s also a free trial, allowing you to survey small teams. Full partner status gets you a range of full-sized templates, all our workshop materials, and (under your control) a listing in our partner directory.


