Torvalds - The AI-powered support system that lets users chat with their knowledge base and more.

August 2023 - August 2024 • Lead Product Designer

What is Torvalds and what was my role?

Torvalds is an AI platform that aims to revolutionize customer service. Users connect their knowledge base (Docs, Notion, Confluence), ticketing systems (Zendesk, Jira), and messaging apps (Slack, Teams, Discord) together.

On top of that is an AI layer where a CSM can ask any question about their knowledge base.

This is the big AI enterprise play that a lot of other companies like DevRev and Glean are doing. Hook up all enterprise data, add an AI search layer on top, and give people the answers.

I was the lead and only designer for the project. I worked directly with the founder and his team of 12 engineers to bring this project to life from just a small admin panel.

We started from just a simple chat interface, like ChatGPT, and grew it into a platform with paying customers.

When creating the design system, I wanted something that was minimal but looked sleek in dark mode.

Like many other designers, I think Linear is beautifully crafted with their simplicity and I was definitely inspired by them. I worked with the founder of Torvalds directly, and he wanted a dark mode app.

However, I felt like a lot of new SaaS startups are adopting the Linear/UntitledUI/Shadcn kind of vibe. Unfortunately for me, I felt inclined to go through the arduous process of making everything from scratch for full flexibility.

So I'd say the overall taste of what I was trying to achieve was Linear for the aesthetic simplicity, Material for the Icons + Font style (Inter is visually close enough to Roboto), but with a tinge of an Atlassian "enterprise SaaS" style aesthetic.

During the entire project, I would deliver new prototypes every week. Each prototype would be delivered in high fidelity and be fully clickable. I kept everything consistently moving forward.

Looking back one of the big takeaways is you just have to keep building, one week at a time. Create something, get it in front of people, refine, and continue the process.

We first started out creating a chat UI.

ChatGPT, Perplexity, Claude, and Bard (now Gemini), were the main AI products that we looked at. At a high level, we wanted a GPT style chat interface where a user could ask anything about their knowledge base.

From a pattern perspective, those apps as well as Slack & Teams created simplified patterns that could be used.

• Chats on the left side with an ability to take actions on each.
• Actions per chat (copy, like, dislike, and confidence score)
• Show sources for each response

Perplexity took an academic, citation-heavy approach and, from the beginning, insisted on showing the sources. We loved that idea: users wouldn't feel that the AI responses were just some black box.

However with Perplexity, the sources were displayed as a drawer. Although I liked that idea as it still gave the chat context, the founder and I thought about how we may approach this differently.

We wanted to frame this as less of a "oh btw, here are the sources used" and more of a "search". The user's query string was no different from a search string in Google, where you could go explore further across sources. The idea was to not only create more engagement, but to have this feature double as an enterprise search. This was especially apparent in the Intercom-style JS widget we also developed.

The goal was to have the bot on all platforms:

(a) a native app
(b) Slack/Zendesk
(c) a widget that could be placed on a documentation website (like Intercom)

For the engineering team, I provided all the interaction specs for Zendesk (using their Garden design system) as well as Slack.

The idea was that the experience would be the same.

(a) Have the same output
(b) Have the ability to view sources
(c) Have all the actions (like, dislike, etc.)

The docs bot was a little different in that it would ultimately mirror the external website's CSS as closely as possible, so the experience felt seamless.

We didn't have to go too crazy, as there is always a trade-off between engineering resources and scale. So, as a designer working for a startup, I thought about simple things we could incorporate so the average person didn't feel it was some weird, out-of-place widget inserted into the site.

In the example below of Redpanda, I'd incorporate things like a red hover state, or making the bot's avatar be the Redpanda mascot.

We gave customers the ability to manage their integrations and AI prompts within the app.

Rather than having all the prompts done behind the scenes by internal engineers, the idea was to expose a lot of the controls to the users. This gave the technical user more control over the bot's output.

One thing I thought about when working with the founder was variations of prompts. There could be prompts at the org level (manage organization), but users could create individual prompts, and from there even have another level down of variations. From there, a user could decide which variation to make active. This flexibility gave teams a way to tune the bot's output for accuracy.
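The hierarchy described above (org-level prompts, per-user prompts, and named variations with one active at a time) could be sketched like this. This is a minimal illustration, not Torvalds' actual data model; all names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Variation:
    name: str
    text: str

@dataclass
class Prompt:
    name: str
    variations: list = field(default_factory=list)
    active: int = 0  # index of the variation currently in use

    def active_text(self) -> str:
        return self.variations[self.active].text

@dataclass
class Org:
    org_prompts: dict = field(default_factory=dict)   # shared org-level defaults
    user_prompts: dict = field(default_factory=dict)  # keyed by (user, prompt name)

    def resolve(self, user: str, name: str) -> str:
        # A user's own prompt (if any) wins over the org-level default.
        prompt = self.user_prompts.get((user, name)) or self.org_prompts[name]
        return prompt.active_text()
```

The key design point is the fallback in `resolve`: an individual prompt overrides the org default, and within each prompt only the active variation ever reaches the bot.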

The next thing I thought about and designed was how to clean up the tribal knowledge that feeds the LLM brain.

In order to deliver high-confidence AI responses, the knowledge base needs to be accurate and up to date. One of the big problems we learned, as several customers started using the bot, was that the answers did not always come back with high confidence.

In one case, there were several thousand internal Confluence articles that were not always blessed. This contrasts with an official help doc article, which was usually written by a documentation person who ensured it was well crafted.

So as a designer, I thought hard about how we could address this. Below I designed the area in the platform where we could manage all of those articles automatically.

The first thing we did was automatically purge articles that weren't used in queries and deprecate old ones. Articles that needed additional review could be looked at manually and then removed from the LLM brain.

Once we had a blessed knowledge base, we wanted to surface the top-level problems to the customer. The idea was to say: look, I'm a customer service manager. I'm busy and don't want to have to dig around. Just tell me what my top issues are on my platform.

So we created this and put it in production. It was achieved by clustering customer issues pieced together from platforms such as Zendesk, Jira, and Slack. Here the UI defaulted to sorting by issue count.
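Once issues from Zendesk, Jira, and Slack have been assigned a cluster label (in the real system, presumably by an LLM), the dashboard's default view reduces to sorting clusters by issue count. A minimal sketch of that final step, with hypothetical names:

```python
from collections import Counter

def top_issues(labeled_issues):
    """labeled_issues: iterable of (cluster_label, source_platform) tuples,
    e.g. ("login failures", "zendesk"). Returns clusters sorted by issue
    count, largest first, which is the dashboard's default sort."""
    counts = Counter(label for label, _source in labeled_issues)
    return counts.most_common()  # [(label, count), ...]
```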

From there, the idea was to continue to surface any top level reporting topics such as:

(a) knowledge gaps found
(b) common trends
(c) knowledge problem or product problem?

The bet was that as LLMs commoditized and models like Deep Research delivered better results, the issue detail page would gain accuracy.

Users could see all of their articles in one place.

Since knowledge bases could consist of docs from Google Docs, Notion, Confluence, and other platforms, we wanted users to have an article browser: a single catalog where they could drill in and see each article's details.

The problem is: What actual part of the document is incorrect? What if there is a piece of text or content in a document that hasn't been updated, and the internal experts who know the answer are not informing the documentation team?

I thought of the idea of a knowledge base article diff, where certain pieces of text could be called out as being in conflict with what the LLM brain knows.

In the example below, the article had not been updated with how a user could log in. Torvalds had received new information from experts commenting in Slack about the login updates, but the docs had not been updated.

This diff was highlighted, along with an AI proposed written change, and the sources for the suggestion.

Each article could have a "freshness score". No diffs detected meant a 100% score.
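The case study only states the boundary condition (no diffs means 100%), so here is one illustrative way such a score could be computed: the share of an article's sections with no detected diff. The formula and names are my assumption, not Torvalds' actual implementation.

```python
def freshness_score(total_sections: int, conflicting_sections: int) -> int:
    """Illustrative freshness score as a percentage: the fraction of an
    article's sections with no detected knowledge base diff. An article
    with zero diffs scores 100, matching the rule described above."""
    if total_sections == 0:
        return 100  # nothing to conflict with
    fresh = total_sections - conflicting_sections
    return round(100 * fresh / total_sections)
```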

A company could have a blessed knowledge base, but things change. Internal experts have new knowledge, and it becomes a constant process to get those experts to put knowledge into articles.

This feature detects that automatically, streamlining the process with AI.

Those little pieces of data that Torvalds ingests from integrations - I called them snippets. And to promote transparency (rather than the system looking like a black box), I designed an interface that broke it down.

It showed how Torvalds:

(a) ingests snippets
(b) determines whether each is fact or filler
(c) checks whether that snippet is already known (in the knowledge base) or brand new, and correlates it to an article
(d) checks whether it's in conflict, and if so creates a knowledge base diff
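The four steps above can be sketched as a single dispatch function. The classifiers passed in (`is_fact`, `find_related_article`, `conflicts_with`) stand in for the LLM calls the real system would make; all names here are illustrative assumptions, not Torvalds' actual API.

```python
def process_snippet(snippet, knowledge_base,
                    is_fact, find_related_article, conflicts_with):
    """Route one ingested snippet through the fact/filler, correlation,
    and conflict checks described above, returning a status record."""
    # (a) ingest: the snippet arrives from an integration (Slack, Zendesk, ...)
    # (b) fact or filler?
    if not is_fact(snippet):
        return {"status": "filler"}
    # (c) already known, or brand new? Try to correlate it to an article.
    article = find_related_article(snippet, knowledge_base)
    if article is None:
        return {"status": "new", "snippet": snippet}
    # (d) in conflict with that article? Then raise a knowledge base diff.
    if conflicts_with(snippet, article):
        return {"status": "diff", "article": article, "proposed": snippet}
    return {"status": "known", "article": article}
```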

The founder of Torvalds had an idea to see a leaderboard as a way to incentivize users to write articles. We could track article updates to Notion, Confluence, etc. and rank them in the Torvalds app with a proposed payout.

One of the key metrics I proposed early on (that we put in production) was "Time Saved".

Both Google and Perplexity used total queries as a north star metric. In our case, the time saved metric was something we included in any marketing material.
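The case study doesn't say how "Time Saved" was computed, but one common way to estimate it is queries the bot resolved without a human, multiplied by an assumed average handle time per ticket. The function and its default are hypothetical illustrations only.

```python
def time_saved_hours(resolved_queries: int, minutes_per_ticket: float = 8.0) -> float:
    """Illustrative 'Time Saved' estimate in hours: bot-resolved queries
    times an assumed average human handle time per ticket. The 8-minute
    default is a placeholder, not a figure from the case study."""
    return resolved_queries * minutes_per_ticket / 60.0
```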

We aimed to disrupt typical if/else workflows by creating circular logic for each app.

A separate feature we played with but didn't ultimately pursue is how we could disrupt the typical workflow builders of today. Typically, they consist of a bunch of nodes which represent an app function (ex. add user to ADP), and if/else lines connecting them - a giant node builder to execute automatic tasks.

The problem is if there are errors, an org has to go have a developer fix it.

With AI, the idea was that it could simply read the logic as written english in a circular fashion. This empowers anyone to write workflows with just text.

One of the promises of AI: Having it give you analytic insights.

AI summaries are becoming more ubiquitous. I've seen them in Yahoo Finance, Slack, and more. In my previous work designing analytics reporting areas, users always wanted to know what the trend was so they could take action. In the pre-LLM era, I know Google Analytics had an AI Insights button that would attempt to do this.

We wanted users to come to the analytics area and be spoon-fed the top trends they need to care about before investigating further. This is similar to the Customer Issues section earlier: here's all this data, just tell me what I need to know.

PLG growth meant creating the full landing page and ads.

The first paying customer came from a referral, but the others came from cold emailing. I designed the website in a more straightforward way that emulates SaaS websites:

I took a lot of screenshots of existing SaaS apps for inspiration as well as looked at the anatomy of a landing page (credit to Supafast):

(a) heading (centered like the Linear approach we see everywhere nowadays)
(b) logos for trust
(c) features
(d) testimonials
(e) integrations
(f) final CTA

All of these pages were designed at every responsive size. Aside from the rotating logo ticker (a common animation), I didn't design with a lot of animation in mind. Maybe a fade-in-on-view effect here and there. Those would come in a future cycle.

When it came to ads, such as these LinkedIn ads, I tried so many variations.

One thing I've learned from marketing is every word, every pixel matters. UXers are also in the business of creating imagery to get humans to do things.

I would take one style, then play with the copy many times over and determine if it evoked a sense of me needing to click to see more.

Here are a few examples of final versions we created. The balance was creating good copy, an image that conveyed the value, as well as branding it with good logo placement.

This was a great project where I got to do it all.

This project really made me feel as if I leveled up as a designer, so I'm glad I went through the journey. We started as just an idea, and grew it to paying customers.


I did everything from product design, marketing one pagers, ads, product demo videos, pitches to VCs, and much more.

Sometimes it's a lot. But I like to rationalize it that it's just giving me more XP in the game of leveling up my skills.


The biggest reinforcement of it all was building something and getting it to people for feedback. You have to get it into the hands of people to see what they say. Once we started doing that, the pace of building grew exponentially. The business challenge as a designer and a strategist is the trade-off between what goes into the platform versus not. It becomes very seductive, especially as a founder who wants the business to keep growing, to include anything a customer asks for.

© Shawn Khodai 2025
