Can Generative AI Create Space for Reflection? – Try “Facilitator”

2025-11-18
Ben Windeler

At Digital Public Square, we’ve spent years exploring how technology can help people engage constructively with social issues. While researching the polarization surrounding Israel and Palestine in Canada, we saw a need for a new kind of digital space. We asked: could a generative AI chatbot, when designed with safety and privacy as leading principles, create a space for thoughtful reflection?

This was the idea behind Facilitator — an AI chatbot designed by Digital Public Square to put user safety at the forefront and create a private space for engagement on polarizing issues. Facilitator is based on a simple inversion of the chatbot premise: it asks the questions, and the person engaging with it answers.

How it works

When you go to the Facilitator website, you can enter a conversation where the chatbot starts by introducing the topic and asking you a question (see below). This is a pre-written opening prompt designed to create a reliable starting point for users and for evaluating the tool. 

A website header reads:
"Facilitator

Share your experiences and reflections on Israel and Palestine with our private AI-powered chatbot. Your messages are private and anonymous unless you choose to Submit your transcript."

A chat message reads:

"Discussions about Israel and Palestine are polarizing, but I believe it is important to create space for civil dialogue. What has your experience been with these conversations, as someone living in Canada?"

The conversation continues with this flipped paradigm: the chatbot asks the questions and the user responds. Unlike typical AI chatbots, Facilitator avoids providing facts or opinions, and does not try to make any arguments. Instead, it asks, inviting reflection rather than reaction.
Messages between the user and chatbot are not recorded by Digital Public Square at any time, unless the user chooses to submit it to us for analysis. Digital Public Square applies this consent-driven approach to data collection to encourage safe exploration of issues that can be difficult to talk about.

Our approach to safety

From day one, our primary design goal with Facilitator was safety. Our hypothesis was that by creating a safe and private space for thoughtful discourse, we could encourage people to reflect on polarizing issues. We were especially interested in reaching people who may not normally be interested in having conversations about polarizing issues in their day-to-day lives. We needed to ensure that the experience felt safe to test that hypothesis, and also to ensure that the interaction did not exacerbate any feelings of polarization.

Our first major design choice was the inversion of the question-asking role. Since LLMs can produce statements that are false or harmful, we wanted to avoid conversations where users would ask the chatbot to verify information, provide guidance, or otherwise influence the user’s thinking. The choice to have the chatbot ask the questions was easy to implement and test, and also helped the experience encourage critical thinking and reflection.

Our second major choice was to build Facilitator to give users as much control over their data as possible. We want the user to feel like their conversation is private until they choose otherwise. The important caveat is that Facilitator uses Anthropic’s Claude API, so messages are handled and stored by Anthropic. This approach was a necessary compromise. Since the data shared with Anthropic contains no user information other than what they type (and we encourage the user not to share private information in their messages), their messages should not be personally identifiable. We chose Anthropic’s API because they ensure that messages processed by the API will be deleted after 30 days and will not be used for model training.

To validate that the chatbot behaved as desired, our team tested the chatbot by trying to have adverse conversations. This “red team” testing approach helped us build our confidence that it behaved according to our principles. We were able to greatly reduce the likelihood that it would act outside of the desired boundaries (asking only questions and avoiding polarizing statements). We found that this questions-only approach was quite resilient to adverse user behaviours, though it is still possible to “jailbreak” the model and get it to behave outside of the desired boundaries with well-designed approaches (which are highly unlikely to occur with regular use).

We did find that some issues remained. In particular, the chatbot can be sycophantic in ways that made users feel like they were not having a meaningful conversation. The chatbot also sometimes focused more on the user’s knowledge (facts, research, evidence) instead of their feelings and values, which could feel offputting for people who didn’t feel like they were experts in the issues they were discussing.

A message bubble reads
"I appreciate that this is difficult to discuss. Would you be willing to share what makes it hard to talk about? Sometimes just acknowledging the difficulty can be the first step. I'm glad to keep chatting. I don't get tired, but I recognize that these conversations can be difficult. You can stop at any time, and if you're comfortable, submit this transcript to help contribute to this project."

What we learned

From December 2024 to July 2025, we collected 70 full transcripts from users we invited to test Facilitator — 51 paid testers, 6 who joined through online ads, and 13 from our working group on inter-community polarization. We also received 40 post-conversation surveys, and the results were positive.

From our surveyed users:

  • 95% reported a positive experience
  • 77% felt more comfortable discussing the topic with the chatbot than with people they know
  • 50% said they felt less polarized or politically frustrated afterward
  • 55% said they would use the tool again to discuss Israel-Palestine
  • 80% said they would use similar chatbots to explore other difficult topics

From the full set of transcripts, we found that most people engaged earnestly. We were encouraged by many thoughtful conversations in which people expressed their concerns, hopes, fears, and ideas. A few people treated it playfully, or even trollishly, responding with memes and non-sequitors while the chatbot patiently kept asking questions to try and steer things back on track. Others were quieter, with mostly brief noncommittal messages and occasional thoughtful reflections. Even those who identified strongly with one “side” tended to express at least some empathy and nuance, acknowledging complexity rather than taking absolutist positions.

Try it yourself

You can engage with Facilitator today at facilitator.digitalpublicsquare.org.

We welcome you to converse, reflect, and explore the tool. Try to learn from the conversation, or try to break it! Either way, we hope you will share your transcript with us if you are comfortable doing so. We hope to continue to learn how people can benefit from the flexibility and social safety of the chatbot user experience, and where it can go wrong.

Each new attempt to harness technology to foster reflection instead of reaction gives us hope that technology can bring us together instead of driving us apart.