The story of how I ended up working at Pluralsight and ultimately led Discovery Products is a long one. I interviewed for a UX job, but was actually hired to be the Product Manager for AI in the fall of 2017. My R&D case study on computer vision and machine-assisted video editing at GoPro got me the job. I guess you could say that driving emergent tech initiatives was my gateway drug into Product Management. I am comfortable with massive unknowns, and while I’m not an engineer, I strongly believe in creating an environment that enables engineers to practice creative problem-solving. This was the R&D mindset I brought to the team.
In 2017–2018, while the Product organization pursued some ephemeral AI project, I bet on building the foundation. Although I wasn’t particularly enamored with the search problem, I understood it as a core platform capability. At that time, it was evident that there were four fundamental problems worth solving at Pluralsight: Tooling, Content Management, Data Standardization, and Discovery.
It was easy to talk myself out of working on AI. At the time, the organization wanted to tell the sexy AI story at a sales conference before truly understanding the data dependencies. Early research told me that it would be a long road to personalization, let alone any kind of compelling AI. A whole layer of data strategy was missing to power personalization, content relevance, learning behavior, and instructional design. It was a problem too big for any one person to solve. I asked many of these questions early on, but because I was a newbie, and the only product manager working remotely, I assumed that others knew things I didn’t.
In my first year and a half at the company, I suffered a kind of blindness — a lack of context because I wasn’t part of hallway conversations. I was not onsite, and therefore not seen. But I told myself that my advantage was that I could build an island. My team would be left alone and have the safety to try, learn, and fail.
Replacing Search
Six years ago, Pluralsight used a third-party solution for ranking, Adobe Search and Promote. It was a black box. No one understood how to tune relevance. Part of the old Search lived in the monolith, and another team owned the frontend. It worked adequately at the time for the single use case of finding video courses by broad topic or technology, but the experience couldn’t flex and scale to accommodate the growing library of content types and future sales initiatives. In addition, there were many silos of browse-like experiences, each owned by a different team and each addressing a slightly different discovery use case. It was the perfect storm of Conway’s Law. There was no single source of truth.
For me, Search was a way to start over and think about the information architecture of the platform. I discerned that, over time, Search would index more and more in its data pipeline, in effect swallowing the platform from the inside out and becoming a centralized discovery platform. My team privately coined the term “HAstile Takeover.” Any topic exploration, any discovery use case, would lead you to Search. It was not anything revolutionary or new. It was foundational.
Building a Search team
When I took on Search in January of 2018, there was no Search team. No engineers or designers were assigned to work on Search. There was a single senior engineer who maintained the backend part-time. Hiring for Search was a lower priority than hiring for other teams deemed more strategic. This, combined with being the only remote product manager, compounded my lame-duck position. I spent half of 2018 painting a vision for Search and advocating for an engineering team. I told myself that if I didn’t see any movement in staffing Search, I’d have to look for another job. By Q3 of 2018, we finally had a full mob of four full-stack engineers.
It’s important to note that building Search required the team to develop expertise in building data products. Most of the engineers on the original Search team were relatively young in their careers. They started from scratch in their understanding of information retrieval: the tools, the technology, what data signals could be leveraged, what relevance meant, and how to test it. Early on, some of us purchased an out-of-print textbook from UC Berkeley, read white papers about Exploratory Search from Microsoft, and attended workshops and conferences on information retrieval. Working on Search was like attaining a post-graduate degree in building data products. There were no internal experts to turn to for help.
Four Years on a Spider Chart
One of the biggest challenges of working on Search was explaining to others how it worked and what we did. Explaining how content Search works was like trying to explain the mechanics of adaptive cruise control: you experience its agency without ever seeing the machinery. I created this spider chart in 2021 so that I could explain to new leadership the pillars, or jobs, that Discovery performed.
I shared the chart above at a fireside chat last year, and someone asked, “How many of these dots did your team know at the beginning?” And I said, “None.” Over the years I had created several other versions of the spider chart with fewer axes. Now that I have lived through this, I can say that some of the dots warrant their own white paper. It is important to note that this is a bird’s-eye view of the types of things we worked on. The chart lacked the dimension of depth, as some of the features were capabilities that we continually optimized or iterated on throughout the years.
The 5 Jobs of Our Discovery Platform
The axes represent the types of jobs our Discovery platform does in order to serve our customers (both internal and external).
Machine & Data Science Features: Many of our features were generated from machine learning models and data science. These included our homegrown Learn-to-Rank model that powered Search, and recommendations that promoted topical and content exploration. This axis included both research work around the models (R&D) and getting the models to run in production (applied). We operationalized ML work. There was always a research track to explore new ideas and models.
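For readers unfamiliar with Learn-to-Rank, here is a minimal sketch of the general technique using LightGBM’s LambdaRank objective. This illustrates the idea, not our homegrown model; the feature names, labels, and data below are all hypothetical.

```python
# A sketch of the Learn-to-Rank idea with LightGBM's LambdaRank objective.
# This illustrates the technique, not Pluralsight's actual model; the
# features, labels, and data are hypothetical.
import lightgbm as lgb
import numpy as np

# Each row describes one (query, document) pair with hypothetical signals:
# [text_match_score, content_popularity, freshness_days, tag_overlap]
X = np.array([
    [12.3, 0.80,  30, 3],
    [ 9.1, 0.95, 400, 1],
    [ 4.2, 0.10,  10, 0],
    [11.0, 0.60,  90, 2],
    [ 2.5, 0.20, 700, 0],
])
# Graded relevance labels, e.g. derived from clicks and course completions
y = np.array([3, 2, 0, 2, 0])
# 'group' marks query boundaries: rows 0-2 are one query, rows 3-4 another
group = [3, 2]

ranker = lgb.LGBMRanker(
    objective="lambdarank",
    n_estimators=50,
    min_child_samples=1,  # only needed because this toy dataset is tiny
)
ranker.fit(X, y, group=group)

# At query time: score the candidate set retrieved from the index, sort desc
candidates = np.array([[10.0, 0.7, 60, 2], [3.0, 0.3, 365, 0]])
scores = ranker.predict(candidates)
print(np.argsort(-scores))  # candidate indices, best first
```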
Data Ingestion: Data programming and management formed the backbone of our Discovery platform. We ingested various content data types, such as video courses, learning paths, and hands-on labs. Additionally, we indexed tags, taxonomies, and libraries, ensuring that all relevant information was accessible, filterable, sortable, and browseable. The ingestion pipeline indexed, cleaned, and stored data, then mapped content data to a common schema so that all content types could be compared and ranked in one list. In 2021, we invested in overhauling our data architecture to accommodate indexing more sources of content through acquisitions and partnerships.
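To make the common-schema idea concrete, here is a hedged sketch of mapping two content types into one document shape so they can be ranked in a single list. The field names are illustrative assumptions, not our actual schema.

```python
# A sketch of normalizing heterogeneous content into one schema so every
# content type can be filtered, sorted, and ranked together. Field names
# are hypothetical, not the actual Pluralsight schema.
from dataclasses import dataclass, field

@dataclass
class DiscoveryDoc:
    id: str
    content_type: str          # "course", "path", "lab", ...
    title: str
    tags: list = field(default_factory=list)
    duration_minutes: int = 0

def from_video_course(raw: dict) -> DiscoveryDoc:
    return DiscoveryDoc(
        id=raw["courseId"],
        content_type="course",
        title=raw["title"],
        tags=raw.get("tags", []),
        duration_minutes=raw["durationSeconds"] // 60,
    )

def from_hands_on_lab(raw: dict) -> DiscoveryDoc:
    return DiscoveryDoc(
        id=raw["labId"],
        content_type="lab",
        title=raw["name"],                 # labs call their title "name"
        tags=raw.get("technologies", []),  # and tag with "technologies"
        duration_minutes=raw["estimatedMinutes"],
    )

# After mapping, every doc flows through the same clean -> store -> index
# steps, so a course and a lab can appear side by side in one ranked list.
```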
LEGOs: Part of our job was to enable other teams to quickly add value. We did this by providing LEGO pieces that other teams could use. These included navigational components, search and recommendation APIs, and design patterns.
Solution Engineering: This encompassed the development of user-facing experiences. These experiences utilized LEGO pieces from Search, but there was also a UI architecture layered on top. This enabled us to dynamically generate any browse-type page based on topic, access, or role.
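As an illustration of what dynamically generated browse pages can look like, here is a minimal sketch in which a page is just a declarative config assembled from reusable pieces and resolved per request. The component names and query parameters are hypothetical.

```python
# A sketch of dynamic browse-page generation: a page is a declarative
# config assembled from reusable LEGO pieces and resolved per request.
# Component names and query parameters are hypothetical.

def build_browse_page(topic: str, role: str, library: str) -> dict:
    """Assemble one browse page for a topic/role/library combination."""
    return {
        "title": f"{topic} for {role}",
        "sections": [
            # Each section is a LEGO piece backed by the same Discovery
            # APIs that power Search, so relevance logic isn't duplicated.
            {"component": "content_rail",
             "query": {"tag": topic, "sort": "relevance",
                       "library": library}},
            {"component": "content_rail",
             "query": {"tag": topic, "content_type": "lab",
                       "library": library}},
            {"component": "facet_nav",
             "facets": ["content_type", "skill_level"]},
        ],
    }

page = build_browse_page("python", "data analysts", "professional")
```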
Query Understanding: Self-explanatory. There was a lot that went into how Search recognized your intent when you typed into the query input box.
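To give a flavor of what goes into it, here is a toy sketch of one slice of query understanding: normalizing the raw string and lifting structured intent out of it before the query ever hits the index. The synonym table and filter vocabulary are hypothetical.

```python
# A toy sketch of query understanding: normalize the raw query and pull
# structured intent out of it before it ever hits the index. The synonym
# table and filter vocabulary are hypothetical illustrations.
import re

SYNONYMS = {"js": "javascript", "k8s": "kubernetes"}
CONTENT_TYPES = {"course", "lab", "path"}

def understand(raw_query: str) -> dict:
    tokens = re.findall(r"[a-z0-9+#.]+", raw_query.lower())
    # Expand well-known abbreviations so "js" matches "javascript" content
    tokens = [SYNONYMS.get(t, t) for t in tokens]
    # Lift content-type words out of the text and into a structured filter
    filters = {"content_type": [t for t in tokens if t in CONTENT_TYPES]}
    terms = [t for t in tokens if t not in CONTENT_TYPES]
    return {"terms": terms, "filters": filters}

print(understand("beginner JS course"))
# {'terms': ['beginner', 'javascript'], 'filters': {'content_type': ['course']}}
```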
Why are the polygons skewed?
When I showed our CTO an early version of this diagram, he asked me, “Why is the polygon skewed to one side?”
There were primarily four reasons:
- Resources and expertise: Early on, we were too small a team, still learning how to build search and scale a data platform. The first polygon in the center was the first version of Search, built by the original team of four engineers. My objective for the team then was “parity,” meaning “Try to make the new Search not suck as bad as the old one.”
- Lack of data standards and poor tagging: This posed significant challenges for us. Since our search team didn’t have ownership of data governance and tagging, the spider chart skewed heavily towards machine learning and data science. Despite the absence of data standards, we explored alternative methods to enhance relevance.
- Org design: It wasn’t until early 2021 that our team officially took over browse and navigation, leading to the expansion of the Solutions Engineering axis in the latest stage. It took three years for the organization to realize that it made sense for a Discovery team to own every discovery use case. The information architect in me predicted it all along, but reality seldom unfolds exactly as expected.
- Business Strategy: Throughout the years I led Discovery, the types of searchable content and their inherent metadata structure multiplied. In addition, in 2021, we began offering bespoke libraries based on purchasing plans. All of this impacted the Data Ingestion axis.
There were many other hurdles besides the ones listed, but I don’t perceive them as obstacles. If anything, they gave me a profound appreciation for the team’s ability to create with possibility.
The Team
When the first engineering team was hired to work on Search, someone said, “Ha didn’t get the team she wanted. She got the team she deserved.” I wouldn’t have it any other way. It’s undeniable that I wouldn’t have become the product leader I am today without the team. I learned a lot of lessons — technical ones, business ones, and organizational ones. I learned that my gut was highly reliable. I learned to lead through COVID and acquisitions, but I couldn’t have done it without a team who trusted me.
But what I value most were the human stories, those invisible dots interwoven into that spider chart. All the times an engineer made baked goods when I traveled to Boston, the arguments around approaches because people cared too much, the time our DevOps engineer said we find truth through friction, when I had lung cancer in 2019 and texted the team from my hospital bed… I couldn’t chart these moments explicitly. But it’s what I remember when I look at the spider chart.