
My role on the product design team has been divided into two areas. The first was improving the design process. That means tighter cooperation between the design team, company clients, and end-users to gain a deep understanding of the requirements. Another aspect has been integrating necessary design process steps, such as regular usability testing and measurement frameworks, to get a clearer picture of the success of the design and development efforts.

The second area of focus has been creating new features and improving existing ones. Activities included primary and secondary research, prototyping, usability testing, visual design, and working with all relevant stakeholders to implement the design solutions in the product.

 
 

Working at Smarp gave me, as a product designer, the opportunity to cooperate with a selection of recognizable companies.

 

Design Process Upgrades

 

The first step we took to improve our design process was to establish regular participation of designers in calls with Smarp customers. The topics in these calls have usually been around using the product, so they provided us with great insight into the customers' mental models and their views on how Smarp should work.

On some occasions, we booked additional calls with a particular customer to clarify the reasoning behind their feedback or improvement requests.

 
 
[Image: Usability testing]

The next step has been integrating usability testing as a required step in the design process. It's hard to stick to a pre-defined process in a high-growth company; it involves convincing internal stakeholders, for example the customer support team, of why direct communication between customers and the design team is required. There's a fear of losing control over the customer relationship that needed to be addressed.

 
 

A measurement framework has been established as part of the design and development process to provide insight into the success of a feature from multiple angles. The framework has two goals.

The first is to use qualitative and quantitative tools to gather information on usability, usefulness, and satisfaction. The second is to evaluate the success of a feature from the business perspective: does it contribute to the company's KPIs, and if so, in what way? That evaluation helps decide how many resources the company should put into developing a feature further.

 
 

Efforts have been undertaken to connect the design team with relevant company departments. One such example is the cooperation with the customer support team, which receives and documents customer questions and complaints.

The design team is becoming present across departments throughout the company. That strengthens the position of the designers and, at the same time, shows company stakeholders what design and the design process look like.

 
 

The focus has also shifted to accessibility. Alongside creating a comprehensive design system, we started taking steps to make the product accessible across all platforms (iOS, Android, and Web).

In addition, discussions started on creating inclusive design solutions, for example, replacing the two input fields for first and last name with a single "Full name" field.

 

Posting Feedback Feature Update

 

Smarp is an employee advocacy product. Employees at customer companies propose posts that are relevant to their company and appropriate to share through their own social media channels. Administrators review the posts and approve or reject them. Sometimes admins want to provide feedback on the posts employees propose.

Employees didn't get any notification about feedback from admins. Understandably, that caused complaints.

 
 

At the beginning, I created an internal stakeholder map. It includes the people involved in the feature: usually the POs, developers, specific people from sales and customer support, and, to some extent, executive-level people.

The stakeholder map lists all the relevant individuals and their perspectives on the feature. It serves as a guide for selling certain design decisions. An important customer had requested the post feedback feature update, and the PO values that customer's opinion. Highlighting that fact in the stakeholder map made it easier to sell design decisions. Recordings of the usability tests with that customer were used to propose design changes that took more time to implement.

 
 

Although the missing in-app notifications that would inform people when their posts receive new feedback from admins seemed like a quick bug fix, we still agreed to do research first. The goal was to look for opportunities to improve the communication experience between admins and the people who propose posts. A research plan was put together identifying the issue, the goal, and the steps needed to complete the research and design.

 
 

One area of focus has been finding out whether we could categorize the feedback that admins give to people who propose posts. With those categories, we could update the post creation flow to remove unnecessary steps.

Manually reading through more than 1,600 pieces of feedback took some time and provided valuable insights into how the feedback feature is utilized. For example, do people who receive more than one piece of negative feedback stop proposing posts? Are admins upset when they have to provide a lot of feedback to many people?

 
 

Recruiting the appropriate people to test particular features is a challenge. Multiple methods have been used, and one of the most successful has been recruiting people from within the product itself.

We integrated a call to action (CTA) into one of the feature updates, inviting people to try out a prototype of the planned changes. Implementing the CTA didn't require any back-end resources: by setting up the Mixpanel events appropriately, we collected the email addresses of signed-in people, which we then used to reach out and schedule sessions.
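As an illustration, here is a minimal sketch of how such a CTA event could be set up with the Mixpanel browser SDK. The event name, property names, and the currentUser object are hypothetical; the source only states that Mixpanel events carried the signed-in person's email.

```typescript
import mixpanel from "mixpanel-browser";

// Initialize once at app start with the project's token.
mixpanel.init("YOUR_PROJECT_TOKEN");

// Called when a signed-in person clicks the in-product CTA.
// `currentUser` is a hypothetical object supplied by the host app.
function onPrototypeCtaClick(currentUser: { id: string; email: string }): void {
  mixpanel.identify(currentUser.id);
  // Attaching the email lets the team export the events from Mixpanel
  // and reach out to schedule usability sessions, with no back-end work.
  mixpanel.track("Prototype CTA Clicked", {
    email: currentUser.email,
    feature: "post-feedback-update",
  });
}
```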

 
 

More than 27% of the people who saw the CTA ended up having a usability test session with us. It has been by far the best method of reaching interested people, since the CTA was placed within a flow those people use regularly.

To improve the customer experience even further, we reach out to every usability test participant once the planned updates are live. If we implemented a suggestion from a participant, we highlight that in the email, so participants feel that their input matters.

 
 

Providing feedback on a post is a two-way communication flow between the admin and the person who created the post. It can also include more than one admin and is tied to the context of approving or rejecting a post.

The proposed improvements had to accommodate all the possible interaction flows, and visualizing those flows helped with that.

View an Example of a User Flow

 
 

Before conducting the remote usability testing sessions, we created an interview script to serve as a guide during the interviews with people testing the prototype.

View a Script Example

 
 

Remote usability testing has been conducted for the post feedback feature. In total, 12 people tested the prepared scenarios. The participants shared their screens and thought aloud while going through the prototype scenarios.

Two iterations have been tested using remote usability testing.

 
 
[Image: Feedback thematic analysis]

Through thematic analysis, we uncovered additional areas of the feature that need improvement. Success, error, quote, observation, confusion, and impression tags were used to categorize the responses and make it easier to document the findings in a research report.
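To show how such tagging can be kept analyzable, here is a minimal sketch of a data structure for tagged observations and a helper that counts tag frequencies. The type and function names are hypothetical illustrations, not the actual tooling used.

```typescript
type Tag = "success" | "error" | "quote" | "observation" | "confusion" | "impression";

interface Observation {
  participant: string; // e.g. "P07"
  note: string;        // what was said or observed during the session
  tags: Tag[];
}

// Count how often each tag occurs, e.g. to spot clusters of confusion
// before writing the research report.
function countByTag(observations: Observation[]): Record<Tag, number> {
  const counts: Record<Tag, number> = {
    success: 0, error: 0, quote: 0, observation: 0, confusion: 0, impression: 0,
  };
  for (const obs of observations) {
    for (const tag of obs.tags) counts[tag] += 1;
  }
  return counts;
}
```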

 
 

One of the surprising findings was the preference for email notifications over in-app notifications. Initially, we planned to fix the in-app notifications and use them to notify people when new feedback is available.

The findings made us change our plans and prioritize email notifications.

 
 

Each usability testing participant received a thank-you email. In some cases, we were able to highlight specifically that we implemented a feature based on one or more comments the participant made during the call.

Although the post feedback feature update didn't include the proposed UI changes, the research efforts still had an impact: they shifted the feature's priority toward email notifications.

 

Measurement Framework

 

On an organizational level, there's an understanding that product updates need to be measured. At Smarp, it was possible to integrate the measurement framework into each new feature or feature update.

The goal of the measurement framework is to provide information on the usefulness, usage, satisfaction, business effect, and usability of a particular feature.

The first new feature where I led the design efforts and where the measurement framework was implemented was the chat feature, available on iOS, Android, and Web.

 
 

At the center of the measurement framework is the North Star Metric (NSM): the key metric against which the success of the feature is evaluated. For Smarp Chat, that metric is message read. At that point, the chat feature brings value to both the sender (their message has been read) and the receiver (they receive the information in the message).
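As a sketch of what evaluating the NSM could look like in practice, the snippet below counts message-read events in a given week. The event shape and names are assumptions for illustration.

```typescript
interface ChatEvent {
  type: "message_sent" | "message_read";
  userId: string;
  timestamp: number; // Unix time in milliseconds
}

// Weekly count of message-read events, the assumed NSM for Smarp Chat.
function messagesReadInWeek(events: ChatEvent[], weekStart: number): number {
  const weekEnd = weekStart + 7 * 24 * 60 * 60 * 1000;
  return events.filter(
    (e) => e.type === "message_read" && e.timestamp >= weekStart && e.timestamp < weekEnd
  ).length;
}
```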

 
 

The next category consists of metrics that relate to usability: task success rate and time to complete for the key flows, such as creating a new 1-on-1 or group chat, sending a text message or an image, and flows involving the search input field.
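For clarity, here is a minimal sketch of how the two usability metrics could be computed from logged task attempts. The TaskAttempt shape is a hypothetical assumption.

```typescript
interface TaskAttempt {
  flow: string;        // e.g. "create-group-chat"
  success: boolean;
  durationSec: number; // time from starting the task to finishing or giving up
}

// Task success rate: share of attempts that completed successfully.
function taskSuccessRate(attempts: TaskAttempt[]): number {
  if (attempts.length === 0) return 0;
  return attempts.filter((a) => a.success).length / attempts.length;
}

// Time to complete: average duration of the successful attempts only.
function avgTimeToComplete(attempts: TaskAttempt[]): number {
  const successful = attempts.filter((a) => a.success);
  if (successful.length === 0) return 0;
  return successful.reduce((sum, a) => sum + a.durationSec, 0) / successful.length;
}
```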

 
 
[Image: Satisfaction]

CSAT, NPS, and user interviews are the methods selected for measuring satisfaction with the chat feature. CSAT and NPS surveys appear randomly after a person completes one of the key flows.
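A minimal sketch of the random trigger, assuming a hypothetical showSurvey helper and a sampling rate that is not stated in the source:

```typescript
const SURVEY_PROBABILITY = 0.1; // assumed sampling rate, not from the source

// Called after a person completes one of the key chat flows.
function maybeShowSatisfactionSurvey(
  showSurvey: (kind: "CSAT" | "NPS") => void
): void {
  if (Math.random() < SURVEY_PROBABILITY) {
    // Pick one of the two survey types at random.
    showSurvey(Math.random() < 0.5 ? "CSAT" : "NPS");
  }
}
```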

 
 

Metrics in the usage category are a mix of default metrics (retention rate, messages sent per user, stickiness) and custom metrics for the chat feature. Time to value is the delta between a person onboarding the chat feature and reading the first received message. Another metric is key driver adoption: based on research, there is a certain organizational user role that is responsible for driving chat adoption throughout a department.
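As an illustration of the custom time-to-value metric, the sketch below computes the delta described above; the field names are hypothetical assumptions.

```typescript
interface ChatUser {
  onboardedAt: number;         // Unix ms: when the person finished chat onboarding
  firstMessageReadAt?: number; // Unix ms: when they read their first received message
}

// Time to value in hours; undefined if the person hasn't read a message yet.
function timeToValueHours(user: ChatUser): number | undefined {
  if (user.firstMessageReadAt === undefined) return undefined;
  return (user.firstMessageReadAt - user.onboardedAt) / (1000 * 60 * 60);
}
```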

 
 

The business KPIs category consists of metrics that are important to the organization itself. These metrics are included to figure out whether a particular feature affects key business metrics, which helps with decisions about future investment of resources in improving the feature.

 

Visit Smarp