Acceptability of Behavioural Interventions in the Social Sector: Embracing the Complexity
by Buoyancy Works | 7 min read
Why this matters now
Ask any advocate: forms and steps accumulate. A new funder brings new metrics, three questions are added; the funder leaves, the questions stay. Another consent, another upload, another ‘must-do’—until the process itself becomes a barrier. Over time, those add-ons become administrative burdens—learning costs, compliance costs, and psychological costs—that suppress take-up and widen inequities. Evidence shows that simplification and salient reminders can materially increase benefit claiming, while poverty-related cognitive load makes every extra step harder, especially for people juggling scarcity (Mullainathan & Shafir, 2013). Meanwhile, staff are swamped and clients wait. Behavioural interventions and behaviourally informed software platforms give us a way to remove friction without removing choice. Our task is to shape the choice architecture that inevitably exists—openly and respectfully—so it works for the people who use it and the people who deliver it. The question is not “Should we do this?”—choice architecture exists in every service already—but how to do it well, ethically, and acceptably for the communities we serve (Sunstein, 2015).
What “acceptability” actually means (and why it predicts success)
In the research literature, acceptability isn’t a vibe check—it’s a multi-component judgment that blends how something feels, how hard it is, whether it makes sense, whether it seems effective, any opportunity costs, ethical concerns, and people’s confidence to use it. Those seven components make up the Theoretical Framework of Acceptability (TFA), which is now widely used across health and social care (Sekhon et al., 2017).
Crucially, acceptability behaves like a system property: it emerges from context (norms, trust, stigma) and changes over time. It both shapes and is shaped by engagement and outcomes—so teams should measure it and iterate, not assume it.
Practical translation: treat acceptability as a moving target. Check it before (anticipated acceptability), during (experience and burden), and after (perceived effectiveness and fit). Use the results to adapt design, scripts, and delivery (Perski & Short, 2021).
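To make the "measure and iterate" point concrete, here is a minimal sketch of a three-touchpoint acceptability pulse. The domain names, five-point scale, and field names are illustrative assumptions rather than a validated instrument; teams should adapt items from the TFA questionnaire work (Sekhon et al., 2022).

```python
from dataclasses import dataclass
from statistics import mean

TOUCHPOINTS = ("pre", "in_use", "post")  # before, during, after the intervention

@dataclass
class PulseResponse:
    touchpoint: str   # one of TOUCHPOINTS
    domain: str       # a TFA domain, e.g. "affective_attitude", "burden"
    score: int        # 1 (strongly disagree) .. 5 (strongly agree)
    free_text: str = ""

def domain_scores(responses: list) -> dict:
    """Average score per (touchpoint, domain), so teams can watch
    acceptability move over time instead of assuming it is static."""
    buckets: dict = {}
    for r in responses:
        buckets.setdefault((r.touchpoint, r.domain), []).append(r.score)
    return {key: round(mean(scores), 2) for key, scores in buckets.items()}

# A dip in "burden" between pre and in-use flags a friction problem to fix.
responses = [
    PulseResponse("pre", "burden", 4),
    PulseResponse("in_use", "burden", 2, "Upload step kept failing"),
    PulseResponse("post", "perceived_effectiveness", 4),
]
print(domain_scores(responses))
```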
Ethics: from discomfort to safeguards
Three evidence-based points can reassure boards, managers, and social-work colleagues:
- Choice architecture is unavoidable. Because every program already structures choices (forms, order of steps, default appointments), the ethical question is how to design that architecture to promote welfare, autonomy, and dignity without removing choice (Sunstein, 2015).
- Common concerns are well-mapped. A recent systematic review shows the debate clusters around four issues: autonomy, welfare, long-term side-effects (e.g., dependency or mistrust), and democracy/deliberation. Use these four as your ethical checklist (Kuyer & Gordijn, 2023).
- Transparency matters—but isn’t enough on its own. Pair plain-language transparency with proportionate justification, documentation, genuine opt-out, and oversight. This shifts practice from “is nudging manipulative?” to “is this intervention justified, accountable, and choice-preserving for this population?” (Sunstein, 2015)
A minimal, workable operating model for non-profits
Many programs carry extra steps that no one meant to keep. This model offers an easy reset: Design a smaller, kinder path with the people who use it, Govern it with light, transparent safeguards, and Learn from real-world use before rolling it out more widely—all while keeping dignity, choice, and staff capacity at the centre.
1) Design (EAST)
When you want to encourage a behaviour, make it Easy, Attractive, Social, and Timely (EAST). EAST is a practitioner-friendly gateway to behavioural design—perfect for co-design sessions with staff and lived-experience partners (Behavioural Insights Team, 2014).
Examples:
- Easy: pre-filled benefit forms; “resume later” links that actually work.
- Attractive: short messages with a clear next step and a visible primary button.
- Social: “Most clients book within 3 days—pick a time that works for you.”
- Timely: reminders tied to pay cycles, childcare schedules, or transit timing.
- Start with friction, not features. Map the current path; name the top 3 drop-offs or pain points (clients and staff).
- Co-design small fixes. Generate options with EAST, then stress-test for agency (choice-preserving, transparent intent).
- Use service patterns, not one-offs (a small code sketch at the end of this step illustrates two of them):
- Active choice (offer 2–3 viable next steps + “decide later”).
- Progressive disclosure (short path now; optional depth later).
- Fair defaults (pre-select helpful options people can easily change).
- Salient reminders (right channel, right time, with a clear “do/skip” action).
- Definition of done (design): a one-page brief (“what we’re changing and why”), success metrics, and an accessibility/trauma-informed check.
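To ground two of these patterns, here is a minimal sketch of fair defaults and active choice inside a hypothetical reminder-preference step. All labels and field names are illustrative, not part of any particular platform.

```python
from dataclasses import dataclass

@dataclass
class Choice:
    label: str
    preselected: bool = False  # a fair default: helpful, and one tap to change

@dataclass
class Step:
    prompt: str
    choices: list              # 2-3 viable options, never a single forced path
    allow_decide_later: bool = True

# Active choice with a fair default: the channel the client used last time is
# pre-selected, but alternatives and "decide later" stay equally visible.
reminder_step = Step(
    prompt="How should we remind you about your appointment?",
    choices=[
        Choice("Text message", preselected=True),
        Choice("Email"),
        Choice("No reminders"),  # opting out is a first-class option
    ],
)

assert reminder_step.allow_decide_later  # choice-preserving by construction
```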
2) Govern (OECD BASIC-lite)
Adopt a light version of OECD’s BASIC toolkit to structure your projects end-to-end: Behaviours → Analysis → Strategies → Interventions → Change. Maintain a one-page public log (“what we tested, why, results, how to opt out”). This creates accountability without heavy bureaucracy and helps with funder or board queries. (OECD, 2019)
- Transparency & consent: plain-language “why this nudge,” easy opt-out/opt-down, alternatives available.
- Proportionate data: collect the minimum to deliver the benefit; document retention; use Canada-hosted storage.
- Accountability: name an owner, a red-team reviewer (not on the build squad), and a monthly review cadence.
- Public log (one page): what we tested, why, safeguards, results, how to complain/opt out (a sketch of a log entry follows this list).
- Definition of done (govern): one-pager published; oversight cadence scheduled; escalation triggers defined.
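The public log needs no special infrastructure. A sketch like the one below (field names are hypothetical) can render the one-pager directly from a record of each trial.

```python
from dataclasses import dataclass

@dataclass
class LogEntry:
    what_we_tested: str
    why: str
    safeguards: str
    results: str
    opt_out: str

def render_one_pager(entry: LogEntry) -> str:
    """Render the public log entry as plain text for a website or board pack."""
    return "\n".join(
        f"{label}: {value}"
        for label, value in [
            ("What we tested", entry.what_we_tested),
            ("Why", entry.why),
            ("Safeguards", entry.safeguards),
            ("Results", entry.results),
            ("How to opt out or complain", entry.opt_out),
        ]
    )

print(render_one_pager(LogEntry(
    what_we_tested="SMS reminder 2 days before appointments",
    why="High no-show rate at intake",
    safeguards="Plain-language intent, easy opt-out, minimal data, Canada-hosted",
    results="No-shows down; acceptability steady across subgroups",
    opt_out="Reply STOP, or call the front desk",
)))
```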
3) Learn (acceptability & equity monitor)
- What to measure: acceptability (does this feel right and useful?), outcomes (does it help?), and equity (for whom?).
- How to measure (framework-level):
- Map a few TFA domains to each intervention (e.g., affective attitude, burden, perceived effectiveness, ethicality).
- Sample at three touchpoints—pre (anticipated), in-use (experience), post (perceived effectiveness)—using fast, low-literacy items plus a brief free-text prompt (Sekhon et al., 2022).
- Track by site, channel, and subgroup to spot where impacts differ across groups.
- Decision gates (plain language; a simple sketch of these gates follows this list):
- Scale when: key outcomes clearly improve (e.g., faster bookings, fewer no-shows, more complete files), client and staff feedback is mostly positive, and no equity red flags appear (no subgroup doing worse). Hold these gains for two consecutive check-ins, then roll out.
- Tune & re-test when: results are mixed or feedback points to friction/confusion. Make one or two targeted tweaks, then run another short pilot before deciding again.
- Pause or retire when: outcomes slip, opt-outs or complaints rise, or you see signs of harm or unfairness. Stop the trial, publish what you learned, and consider an alternative design.
- While scaling: keep opt-out easy, publish a one-page summary (“what we changed and why”), and schedule a follow-up review 4–6 weeks after rollout.
- Continuous learning: summarize results, record decisions, update SOPs.
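The gates above can be written down as an explicit rule before results arrive, which keeps scaling decisions honest. This sketch mirrors the plain-language gates; the inputs and thresholds are ones a team would define for itself.

```python
def decision_gate(outcome_trend: str,      # "improved" | "mixed" | "worse"
                  feedback_positive: bool,
                  equity_red_flag: bool,   # any subgroup doing worse
                  good_checkins: int) -> str:
    """Apply the pause / scale / tune gates in order of severity."""
    if equity_red_flag or outcome_trend == "worse":
        return "pause_or_retire"   # stop, publish learnings, consider redesign
    if outcome_trend == "improved" and feedback_positive:
        # Hold gains for two consecutive check-ins before rolling out.
        return "scale" if good_checkins >= 2 else "hold_and_recheck"
    return "tune_and_retest"       # mixed results: 1-2 targeted tweaks, re-pilot

print(decision_gate("improved", True, False, good_checkins=2))  # scale
print(decision_gate("improved", True, False, good_checkins=1))  # hold_and_recheck
print(decision_gate("mixed", True, False, 0))                   # tune_and_retest
```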
Plain-language answers to common worries
“Is this manipulative?”
We preserve choices (no lock-ins), state our intent in plain language, publish what we test, and invite feedback at every step. Clients can opt out or choose an alternative path.
“Does this respect clients as capable decision-makers?”
We use active choice (clients pick goals, reminder timing, channels) and pre-commitment options only when they support a self-stated objective. The point is to remove friction, not remove agency.
“What about Canadian context and legitimacy?”
The Government of Canada’s Impact Canada hub has built behavioural-science capacity and a 500+ member community of practice. The approach is mainstream in public services here.
A 30–60–90 day roadmap for small teams
Days 1–30: Establish guardrails & inventory friction
- Publish a one-page Behavioural Insights policy (purpose, transparency promise, opt-out, contacts).
- Run a quick “friction inventory” with staff and clients: where do people stall, abandon, or ask for help?
Days 31–60: Prototype + acceptability checks
- Co-design 2–3 EAST-aligned tweaks for one journey (e.g., benefits intake).
- Set up three acceptability items and a short free-text question.
- Pilot with n≥50 clients or ≥2 weeks of real traffic (a tiny helper encoding this bar follows this list).
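As a small illustration, the pilot-sufficiency bar from this list can be encoded up front so “enough data” isn’t renegotiated after results arrive; the figures below are the ones named above.

```python
from datetime import date

def pilot_has_enough_data(n_clients: int, started: date, today: date) -> bool:
    """The bar from this roadmap: at least 50 clients OR two weeks of traffic."""
    return n_clients >= 50 or (today - started).days >= 14

print(pilot_has_enough_data(35, date(2025, 5, 1), date(2025, 5, 16)))  # True: 15 days
```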
Days 61–90: Decide and scale responsibly
- If acceptability and outcomes both clear the bar: document, publish the one-pager, train staff, and scale.
- If acceptability is weak (e.g., high burden, low perceived effectiveness): refine or retire; don’t force adoption.
An “Acceptability Checklist”
- Preserves choice. (easy opt-out/opt-down; fair defaults; alternatives available; active choice where stakes differ)
- Uses plain language and is transparent. (what’s offered, why, expected benefits/risks, data handling; avoid jargon)
- Has been co-designed with affected groups and honours lived experience. (co-create with lived experience; cultural/language fit; accessibility & trauma-informed review)
- Has a measurement framework in place for baseline, intake, touchpoints, and exit. (brief pulse checks pre/during/post; short free-text; low burden; feed results back to teams)
- Shares summarized results with stakeholders. (one-page summary/public log; what changed and why; who to contact)
- Offers a channel for reporting concerns and clear oversight. (named owner; simple complaint/appeal path; periodic review; publish decisions)
- Monitors for equity and unintended outcomes. (track heterogeneous impacts; who benefits least/most; who opts out and why; adapt or retire if gaps emerge)
Bottom line
The systems we’ve built to help people have, over time, gathered extra steps and small snags that slow them down. Behavioural interventions give us a gentle way to clean that up—making the next step clearer, reducing effort, and keeping real choice intact. The aim isn’t to persuade people to do what we want; it’s to remove the friction that gets in the way of what they already want to do.
Acceptability is our safeguard. When we co-design with clients and staff, are transparent about what we’re doing and why, and keep opt-out easy, we protect dignity and trust. And when we listen before, during, and after—using light, regular feedback—we can spot where something feels off, adapt quickly, and make sure improvements work for different people in different circumstances.
A simple loop keeps this practical: Design small, humane changes (EAST), Govern them with light guardrails and accountability (OECD BASIC), and Learn from real-world use (acceptability + outcomes) before scaling. Run it on any journey—intake, reminders, scheduling, coaching goals—and only roll out when results are better and feedback is positive, with no equity red flags.
Done this way, behavioural interventions aren’t a bolt-on tactic; they’re a respectful way to restore flow to services, lighten the load on frontline teams, and help people move forward on their own terms.
About Buoyancy Works
Buoyancy Works is a Calgary-based social purpose company dedicated to empowering individuals through behavioural science and technology. We help frontline organizations, coaches, and advocates better support their clients—whether they’re working toward greater stability, seeking employment, or building financial resilience. Our platform is designed to make everyone’s life easier: it streamlines the work of staff by reducing administrative burden and offering evidence-backed tools they can use in real time, while providing clients with personalized guidance and structure that feels clear, encouraging, and accessible. By making it easier for coaches to do what they do best—build trust, provide support, and guide progress—Buoyancy Works strengthens outcomes across stabilization and economic empowerment domains, while improving the experience for everyone involved. The platform aligns with tools like the Sustainable Livelihoods framework and Resiliency Matrix by supporting holistic, client-centred approaches that recognize the complex interplay of assets, challenges, and progress across multiple life domains.
Learn more at buoyancy.works.
Acknowledgement
Portions of this article were developed with the assistance of ChatGPT, an AI language model by OpenAI, used under the direction of the Buoyancy Works team. Final content reflects the interpretation and decisions of the Buoyancy team.
References
Sunstein, C. R. (2015). The ethics of nudging. Yale Journal on Regulation, 32, 413–450. https://openyls.law.yale.edu/bitstream/handle/20.500.13051/8225/15_32YaleJonReg413_2015_.pdf
Kuyer, P., & Gordijn, B. (2023). Nudge in perspective: A systematic literature review on the ethical issues with nudging. Rationality and Society. https://journals.sagepub.com/doi/10.1177/10434631231155005
Sekhon, M., Cartwright, M., & Francis, J. J. (2017). Acceptability of healthcare interventions: An overview of reviews and development of a theoretical framework. BMC Health Services Research, 17, 88. https://bmchealthservres.biomedcentral.com/articles/10.1186/s12913-017-2031-8
Sekhon, M., Cartwright, M., & Francis, J. J. (2022). Development of a theory-informed questionnaire to assess the acceptability of healthcare interventions. BMC Health Services Research, 22, 279. https://bmchealthservres.biomedcentral.com/articles/10.1186/s12913-022-07577-3
Perski, O., & Short, C. E. (2021). Acceptability of digital health interventions: Embracing the complexity. Translational Behavioral Medicine, 11(7), 1473–1480. https://academic.oup.com/tbm/article/11/7/1473/6272497
OECD (2019). Tools and Ethics for Applied Behavioural Insights: The BASIC Toolkit. https://www.oecd.org/en/publications/tools-and-ethics-for-applied-behavioural-insights-the-basic-toolkit_9ea76a8f-en.html PDF: https://www.oecd.org/content/dam/oecd/en/publications/reports/2019/06/tools-and-ethics-for-applied-behavioural-insights-the-basic-toolkit_bbbaaa7a/9ea76a8f-en.pdf
Behavioural Insights Team. (2014). EAST: Four Simple Ways to Apply Behavioural Insights. https://www.bi.team/publications/east-four-simple-ways-to-apply-behavioural-insights/ PDF (2014 update): https://www.bi.team/wp-content/uploads/2014/04/BIT-EAST-handbook.pdf PDF (2013 edition): https://www.bi.team/wp-content/uploads/2015/07/BIT-Publication-EAST_FA_WEB.pdf
Government of Canada – Impact Canada: Behavioural Science hub. https://impact.canada.ca/en/behavioural-science
Mullainathan, S., & Shafir, E. (2013). Scarcity: Why Having Too Little Means So Much. New York: Times Books.

