Conducting a Design Sprint…Virtually

by Khor Le Yi, 2020

9 min read · Aug 11, 2020

I’ve had my doubts about whether it’s possible to conduct design sprints online. My experience has always been running them in person, where we can write on post-its and move them around. With COVID-19 pushing us all to work remotely, we decided to try running a sprint remotely too.

As you might know, Ottodot is an EdTech startup that strives to make learning fun for kids. In this design sprint, our objective was to push ourselves further by creating a virtual world where kids run their own schools. I will touch on the steps we took to run this sprint. I will also share some pros and cons I have observed from running a sprint online, as well as tools I found useful.

Background Information

This sprint was conducted with a team of 4 individuals (2 Co-founders, 1 Engineer, 1 Psychologist). It was held across two weeks, and we met around 3 hours every day.

1. Getting to know the team

Before we started our sprint, we did a round of introductions where we shared our strengths and weaknesses. Working online demands even better communication, so this was important to ensure that we knew how to work with each other.

2. Identifying a hypothesis

We started the conversation with a question: What is the hypothesis we want to test? Having a clear hypothesis is essential to keep the entire team grounded through the whole sprint. If at any point we got confused, we could look back at this hypothesis and make sure we were on the right track.

This discussion took quite a while, as we each had our own interpretations. As we shared our views, those who weren’t speaking would jot the points down on Google Docs.

Eventually, we concluded on a hypothesis to test: Will kids want to create their schools and teach other kids?

3. Problem Statement

With a hypothesis to ground us, we were able to form a clear problem statement quickly: How might we get kids interested in creating learning content and teaching other kids?

4. Brainstorming for Ideas

It was important to reiterate to everyone on the team to aim for as many ideas as possible and not be critical of them. Quantity was vital. We then brainstormed individually and reconvened to share.

I like using my iPad to brainstorm, as I can share my screen via the AirPlay function on Zoom. However, pen and paper would work just as well!

5. Converging on Ideas

This part was slightly trickier. Offline, we would write our ideas on post-it notes and gather to share them. Online, we were not aware of any tool that could replicate this experience, so we decided to write our ideas down on Google Sheets. We took turns sharing all our thoughts, and the rest would help type them in.

After sharing, we each voted on the ideas we liked (we gave each other unlimited ballots) and used the filter function to sieve out ideas with three or more votes. We then took two ideas each to prototype.

(In hindsight, we realised that Miro would have been the perfect tool to use. If you do not have access to Miro, Google Sheets is a decent alternative.)

6. Lo-Fi Prototyping

We took an afternoon to create low-fidelity prototypes. I used Figma to build mine, representing graphics with grey boxes. For those who have not heard of it, Figma is a collaborative interface design tool that designers use to create wireframes, mockups, and prototypes.

Some of the team members used Google Slides to visualise their ideas.

What is most important at this stage is to represent the idea with some form of graphics. Aesthetics is not the main priority.

We then took screenshots of our ideas and pasted them into Figma. We gave ourselves three votes each and voted for the prototypes we thought were key to answering our problem statement. We used red circles to represent votes.

We all used the same colour to vote so that our votes remained anonymous. Anonymity is essential for everyone to feel comfortable being honest about their opinions.

7. Hi-Fi Prototyping

After we identified the prototypes with three or more votes, we moved on to turning them into high-fidelity prototypes. We used Figma to build the screens and link pages together according to the flow we had in mind.

We linked up all the frames together and eventually came up with a functional game that represented the idea.

We continuously tested each other’s segments to ensure that the user flow made sense. We also asked friends to test it out to make sure there weren’t any significant gaps in the game before conducting our user tests.

8. Preparing for User Testing

It was vital for us to be prepared for each of our user testing sessions. We made a list of questions to ask and split roles among ourselves:

Lead — This individual is in charge of interacting with the kid. They ask the questions we prepared and get the user to share more about what they are thinking.

Observer — This individual supports the Lead. The Lead can get caught up in asking the prepared questions and miss opportunities to get the user to share more; the Observer catches these.

Scribe — This individual takes note of specific words/sentences the user says. It is important to stay true to the user’s exact words, as they could become the language we use in the product.

Backend Support — As our prototype does not let the user type input themselves, this individual keys values into the prototype from the backend to simulate user input.

Do note that these roles were designed around the specific functions of our prototype; they would vary depending on the type of product/experience you are developing. It was also essential to give each other permission to intervene at any point of a user testing session if we felt there were insights from the users we wanted to dive into.

9a. User Testing — Breaking the ice

We conducted a total of 6 user testing sessions, each lasting 45 minutes to 1 hour. As our target audience was kids aged 9–12, we contacted their parents to ask if they would be interested in being game testers. The sessions were held over Zoom, where we could also record them to reference later on. We asked users to join from their laptops so we could observe them while they used our prototype.

At the start of each session, it is crucial to break the ice, especially when your users are kids facing four adults on the call. This is a critical moment, as you want your users to be as comfortable as possible sharing their opinions later on when testing your product.

The approach we took to break the ice was to do a self-introduction together with a fun question:

  1. If you could have any superpower, what would it be?
  2. If you were a bubble tea flavour, what would you be?
  3. What’s the best prank you’ve played on your parents?

One way to tell whether your user has opened up is to see how spontaneously they speak. Do they hesitate before speaking? Do they look afraid to say something? If so, it is best to continue breaking the ice. Users who have not opened up might be afraid to share their honest opinion of your prototype with you.

9b. User Testing — Running the test

Before diving into testing the product, we establish with our users that they are the expert, and that it is alright to give negative feedback, even if it means redoing the product. The problem with high-fidelity prototypes is that users might feel bad about giving negative feedback because they think you have put in a lot of effort. So it’s important to reassure them that this is just a prototype and we can always redo it quickly.

During the session, we sent them a link to our prototype and observed how they interacted with it by getting them to share their screen. It was particularly useful to follow where their mouse went as an indicator of what they were looking at.

Throughout the session, we made sure to stick to our roles. However, there were instances where the scribe and backend individual made some observations and would ask our game testers questions. Knowing that we had permission to intervene at any time was necessary for such instances.

10. Consolidating feedback

After running six user testing sessions, we consolidated a lot of feedback from the users.

It was essential for us to find common patterns in the feedback to know how to improve. We explored a new tool called Miro, and it is incredible (we do not earn a commission from them; we simply think it is a fantastic tool). While conducting our user tests, we had spotted a few overarching themes, so we started from there. We transferred all our findings onto individual post-it notes and placed them on the relevant boards.

We then found subcategories for each theme and grouped the notes accordingly. Here’s an example of how it looked before and after categorising them into subcategories:

After grouping all the feedback, we went through each subcategory to note down the essential findings and takeaways. This was important, as it ensured that we looked through all the feedback we had gathered from the intensive user testing sessions.

Within each subcategory, we placed our insights on white post-its and put them in a sub-sub-category. We read through all these insights and probed each other on what they meant if we did not understand any.

Going through these insights allowed us to empathise much more with our users and gave us a deeper understanding of what they are looking for in our product.

(Another approach you can consider for synthesising data is Affinity Mapping, a technique that I feel provides a more structured way of making sense of the data.)

Next Steps

To reference the five stages in the design thinking process, we are constantly in an iterative process. After conducting the six user testing sessions, we decided that we needed a better understanding of our users. Thus, we went back to defining a more precise problem statement.

However, how you decide to move forward really depends on the stage your product is at. What is most important is to experiment and discover what works best for you and your team. While it might feel like you’re always in a haze, keep in mind that it is about experimenting, not finding the perfect or correct answer. Just trust the process! It will lead you to create something that could help your end users; in our case, an extraordinary game experience that makes learning fun for kids. I hope you’ve enjoyed this sharing and that our experiences have given you some insights on running a design sprint remotely.

Written by: Khor Le Yi

Team members: Lei Wong Lei, Koh Shenru, Howard Liu


Ottodot

We strive to make learning fun for kids. Every story is worth sharing, and we hope our articles provide interesting insights for you to learn from.