Case Study

Recommendation Platform Rework

Unpacking the black box of algorithms

Context

“Algorithm” is a word that doesn’t mean much to the average person: it’s mysterious, and at the same time it remains ever-present. Mostly associated with social media feeds, algorithms are essentially programs that can be combined to do different things, like recommending personalized content to users of any digital service.
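To make the idea of "combining algorithms" concrete, here is a minimal, purely illustrative sketch: two toy scoring algorithms (overall popularity and the user's category affinity) blended into one personalized ranking. All names, data, and weights are hypothetical and are not taken from Globo's actual platform.

```python
# Illustrative sketch only: two toy "algorithms" combined into a single
# personalized recommender. Catalog, categories, and weights are invented.

def popularity_scores(items):
    """Score items by their share of overall views (one simple algorithm)."""
    total = sum(i["views"] for i in items) or 1
    return {i["id"]: i["views"] / total for i in items}

def affinity_scores(items, user_likes):
    """Score items by whether they match a category the user already likes."""
    return {i["id"]: 1.0 if i["category"] in user_likes else 0.0 for i in items}

def recommend(items, user_likes, weight=0.5, top_n=2):
    """Blend the two algorithms into one ranking; `weight` favors affinity."""
    pop = popularity_scores(items)
    aff = affinity_scores(items, user_likes)
    combined = {k: (1 - weight) * pop[k] + weight * aff[k] for k in pop}
    return sorted(combined, key=combined.get, reverse=True)[:top_n]

catalog = [
    {"id": "news-1",   "category": "news",    "views": 900},
    {"id": "sports-1", "category": "sports",  "views": 500},
    {"id": "novela-1", "category": "novelas", "views": 100},
]

# A user who likes novelas gets the niche item ranked above raw popularity.
print(recommend(catalog, user_likes={"novelas"}))  # → ['novela-1', 'news-1']
```

Real recommendation platforms combine far more signals than this, but the principle is the same: independent algorithms each produce scores, and a configuration layer decides how they are weighted and combined per experiment.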

Globo is the largest media and communication conglomerate in Brazil, segmented in several areas and brands, but with a strong unified core. Its digital products amass millions of users, and it puts out several experiments every year, for which it needs a combination of algorithms. That need has increased dramatically over the years. The old recommendation admin platform was hindering the performance of data engineers and scientists. It sorely needed a redesign that made it more user-friendly and optimized performance.

To comply with my non-disclosure agreement, I’ve omitted confidential information in this case study. The information that follows is my own and does not necessarily reflect the views of Globo.

My Role

Product Designer | Globo

User Research, Quantitative Analysis, Workshop Design and Facilitation, Visual and Interaction Design

Team of 2 designers

Feb - May 2022

The Challenge

Optimization

Creating an MVP redesign of the Recommendation Admin to reduce teams' reliance on the recommendation team and improve product recommendations.

Versatility

Including users who are not tech-oriented, with a platform that is easy to use and understand without compromising functionality.

Design Process

The first step was to assess the old platform's strengths and weaknesses. The platform had obvious problems and was not suitable for non-developers: it was hard to understand and took a long time to get familiar with. From a designer's perspective, it was difficult to see how to make such a technical tool more user-friendly, so we moved on to benchmarking research to form a few hypotheses for the user research phase.

Several of the hypotheses that came up in this phase were confirmed later on: admin language was not accessible to all users, making it unclear to many teams how they could optimize their use of the platform and how to understand the algorithms’ impact on the performance of their products. Overall, the old platform was too opaque for users who weren’t already deeply involved in it.

The user research was designed to help us understand the users' pain points, needs, and processes. We interviewed five data scientists (one from each of the five products that use recommendation features), one product owner, and two machine learning engineers from the recommendation team.

Top Insights

  • Almost all respondents were dissatisfied. They see the recommendation features as vital to the products, but they don't understand how they work or how to improve access to them.
  • The generated output was satisfactory, but the process of getting there created confusion and wasted the users' time.
  • The terms “recommendation” and “experimentation” are often confused.
  • The data scientists want more autonomy to fully investigate the results of each recommendation, with easy access to the data.
  • The algorithms are not documented, keeping knowledge siloed within the recommendation team.

Having unearthed this information, we decided to run a design sprint, focusing on the definition and execution of the project ahead.

We presented the material gathered thus far on the first day and decided which problem to prioritize: how to configure a new recommendation experiment. On the second day, we sketched solutions, refined our selections, and created the storyboard. The goal of the third and fourth days was to create interfaces and prototypes so that we could conduct usability testing on the fifth day. Everything went as planned, and we arrived at a solid solution in a relatively short amount of time.

In retrospect, the sprint would have benefited from a narrower scope instead of addressing the whole flow at once. The results were good, but the process felt overwhelming at times, and we had to work harder to stay objective and keep the sprint flowing.

Solution

The final design was a six-step recommendation flow that simplified the process of combining algorithms to create experiments. This made the process less overwhelming and easier to understand, improving usability without sacrificing the essential tools developers were already familiar with.

Usability testing was key to ensuring the flow worked for all types of users, so we were diligent in interfacing with the development team. Refinement and implementation followed, ensuring that no one would be left behind in the transition to the new platform.


Results

Unfortunately, I left the Recommendation team before implementation was completed, so I didn't have the opportunity to oversee the results as fully as I would have liked. However, I am satisfied with the final product and confident in its quality. The new platform has the potential to increase the independence of data scientists, saving time and, consequently, costs, especially during onboarding. It also gives data scientists opportunities to streamline their process, since they can explore results independently and experiment freely.

As a designer who had no background in recommendations and algorithms prior to this project, it was a great opportunity to learn about what goes on behind the scenes of the products. Applying UX to such a technical area helped me expand my approach to usability in my design process and made me value the development process more. The learning curve was intense, but I came out on the other side seeing my role in design with new eyes.