Nacho: In today’s episode of Chain of Thoughts, we will talk about how the latest AI developments are calling for a redefinition of product design principles. Do standard design sprints hold in this new landscape? How do product designers and AI experts interact with one another to discover LLM opportunities? What are the most common challenges when interacting with users and customers who have no experience in AI? And what about UX in this context? Make sure you listen all the way through so you don’t miss a single point.
Welcome to today’s episode of Chain of Thoughts. I’m laughing because we always do like a “clack” at the very beginning and it’s a funny thing. So thanks again for being here. Vico, Calde, how are you doing?
Vico: Yeah, you need to get it again in recording, it’s always a funny part to start recording the episode.
Calde: Yeah, I don’t think the “clack” thing is needed anymore, it’s something from the past, you had to coordinate audio, video, and it was useful for that. But yeah, I think it’s funny to do it, like start, right? So…
Nacho: Yeah.
Calde: I’m happy too that it’s Friday, we always record on Fridays. I don’t know, maybe you’re listening on Monday or Tuesday, and actually good for you and bad for us.
Nacho: TGIF. But, well, today we’re going to talk about one of our latest obsessions, I would say. So I think that this episode is going to be one of the funny ones and we will intervene all the time. So probably you will see that we overlap one another. But today’s goal is to talk about, or to understand, product design in the context of AI applications. This is something that we do a lot at Arionkoder and that we have put our minds on for a long time. How can we improve product design in the context of implementing AI, you know? When we talk about implementing AI, we are talking about implementing something that is slightly different from traditional software in some cases; in other cases it’s completely different. So all these product design principles that both Vico and Calde know a lot about, and that I’m learning in these conversations, sometimes don’t hold 100%. So I think that this is one of those episodes in which we will talk about the experiments or things that we are thinking about, and it’s going to be one of the good ones. So I would like to start with a question. It’s an open question. I want to understand your point of view about existing design sprints in the context of AI-based products. Whoever wants to go first, feel free.
Vico: I will start maybe with a few thoughts to get to the point. Because you mentioned how they do not hold, basically, and that starts from the premise that they need to be adapted. And that’s already a value judgment about the process itself, right? Stemming from that question, I do want to push back a little: design sprints per se aside, the design thinking process and the stages it proposes do still stand as valid, in a sense. And why do they still stand? Because they apply the same frameworks as they would for mobile app development; we need to consider the intricacies of what it is to build a UX, and that requires specific frameworks. And we always start with the initial steps. Guys, we always need to start with an understanding and empathizing stage in the design thinking process. It will also be important in future stages, and I’m thinking of AI here. In that sense, I want to bring this forward: if we abstract far enough, the stages that design thinking proposes still apply, and we will go through each one of them. But we cannot try to push the exact same frameworks we normally use for a regular software design process, because we have to take into consideration, for example, where we are in the final stage. So we need to keep adapting, understanding what frameworks we’ll need to add and what questions we’ll need to incorporate, and how we incorporate that scientific collaboration into our design teams. And that’s where I think it gets interesting, and also challenging: how we will continue to interact and push forward.
Nacho: Yeah, let me just say something. Yes, you’re totally right because in the end, if we talk about AI-based software, we are still talking about software in the end. And we have to implement the software, if it’s a mobile app we still need to sketch different views that you’ll have, different screens and that stuff, so of course those processes still hold. But to your point, it’s really necessary to find out which specific parts of this design process are affected by the fact that we are thinking about AI components. So yeah, I totally agree. Sorry, Calde, you wanted to say something?
Calde: No. Yeah. I think it’s also a matter of attitude towards something new, a new material, like AI, artificial intelligence. How can we incorporate it into product design? Because I think something that happens is that at the start there are many different teams and they work in silos, right? The AI team is designing the model, and the product design team is designing the application. And they kind of interact, but each one is doing their own thing, right? And they don’t talk a lot. In the best cases there are some conversations about alignment. But I think product design people, product managers and UX designers, need to start considering this AI thing as a new material that they can incorporate into traditional processes, right? And that does change the process. It will be small changes, but at least, instead of just thinking of building the wrapper, wrapping the experience where there’s AI, you can start thinking about the full experience, right? The experience for the user, and how they interact with the model, not only with the application and the interfaces, but the model. There’s something really interesting when you start doing that, because you get into the details of how the model works, and then you have to start learning the basic concepts of AI. It’s something new, so there’s apprehension, fears maybe, about incorporating that. But I think a good part of designing the way the model interacts with users has to do with workflows and feedback loops and things we already know from our processes.
Nacho: Yeah, and you mentioned something that is also kind of my obsession for the last few months, which is that there’s a problem, a chicken-and-egg situation. Because when we think about designing an AI-based product, we usually say: alright, don’t focus on the UI, don’t focus on the user experience whatsoever, because you first need to verify whether the AI components are feasible or not. If you’re asking me to develop a tool that helps oncologists cure cancer, but you rely on an AI model that needs to determine the exact drug for a specific type of tumor, that’s challenging. And I first need to verify whether building that model is feasible or not. So why design the screens if I don’t have the AI model done first? But on the other hand, the whole process of designing the application already gives you an idea of the AI models that you’ll need. So where to start? That’s kind of the difficult part. So I guess that in this particular context, it’s fundamental to produce a good interaction between the product designers and the AI people, right? That’s the fundamental piece that we need to add to the existing design sprints: AI experts who can collaborate in identifying the opportunities, right Calde?
Calde: Right. And I want to get back for a moment to what Vico said about understanding; it all starts with understanding. I mean, there’s maybe a chicken-and-egg situation in terms of what to do first, whether you go first for the UI or first for the feasibility of the model. But before that, you need to understand the workflow of those users. And maybe in that sense you can also explore compromises, right? In the example you were giving, maybe the tool does provide the precise medication that needs to be used to address some kinds of cancer, but there can be suggestions in the middle, feedback loops with users that guide the user towards selecting the best medication. So you start to think of the interaction between the human and the model as something that happens through time, in steps, rather than a simple input that leads to a simple outcome that needs to be the best possible outcome. There is the need for that in some sense, but it can be something that is created together by the human and the model, right? And I’m talking, of course, about automation versus augmentation, which is something we’ve already been talking about; we have an article published about that, and how it applies to healthcare, on Arionkoder’s blog. But essentially it comes down to this: instead of just thinking of the model as something that automates a series of tasks, you can get into the task, learn the steps that a user normally follows, and create several artifacts that interact with the model in a way that allows the user to get to something better, that empowers them, right? I’m not sure I answered your question, Nacho, but I think this is important.
Nacho: Yeah, I think that you answered my question. Sorry Vico.
Vico: I think this is going to happen all the time in this episode, people, sorry about that. We’re going to be jumping in all the time. Do you want to comment on that? I was going to bring something up.
Nacho: No, I was going to say that yes, that answered my question. But at the same time, I think that by understanding the user journey, right from the very beginning, you can also figure out which data sources you might have, or might use along the way. Because, for instance, if you think about this process, this straw-man oncologist that we are talking about now, and you say: alright, usually the process starts like this, we first do an in-depth review. Okay, then which data sources do you use for the in-depth review? Well, I use PubMed, I use bioRxiv, these repositories. Alright, I already have an idea of the data sources you want to exploit. And that already connects me, potentially, with a solution: maybe to do Retrieval-Augmented Generation on these repositories, or implement solutions like that. From your face, Vico, I have the feeling that you were going to say exactly that. So is that already happening?
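The Retrieval-Augmented Generation idea mentioned here can be sketched in a few lines. This is a toy illustration under stated assumptions, not a production pipeline: the corpus is invented, and simple keyword overlap stands in for a real embedding-based retriever.

```python
# Toy RAG sketch: retrieve the most relevant snippets from a small corpus,
# then assemble them into a prompt for an LLM.
import re

def words(text: str) -> set[str]:
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    return sorted(corpus, key=lambda d: len(words(query) & words(d)),
                  reverse=True)[:k]

def build_prompt(query: str, snippets: list[str]) -> str:
    """Glue the retrieved context and the question into a single prompt."""
    context = "\n".join(f"- {s}" for s in snippets)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Drug A showed efficacy against lung tumors in a phase II trial.",
    "Dietary guidelines for cardiovascular patients.",
    "Drug B is contraindicated for patients with kidney disease.",
]
query = "Which drug helps with lung tumors?"
print(build_prompt(query, retrieve(query, corpus)))
```

In a real system the retriever would search embeddings of PubMed or bioRxiv abstracts, but the shape of the loop (retrieve, then prompt) stays the same.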
Vico: I was thinking about something really connected, actually, because I’m really glad Calde brought it back, automation versus augmentation, because it’s a critical part of the decisions and of understanding. We have to understand, from the type of task and what it implies for the user, that there is a decision not only from a machine learning, technical perspective: is this a task the user actually likes and enjoys doing? Or is it something they would prefer to automate? There’s something there to take into consideration when you are understanding the problem itself, an additional touchpoint for the decision the team will finally take. And that reminds me of an additional thing, the next decision you will have afterwards. I’m not so technical, so you’ll have to correct me here, but as I recall, if you’re designing a solution here, the experience design would change completely based on what technical approach you decide on, basically. So these are technical decisions that are afterwards taken by the full team: UX and UI designers, product managers, the technical team. Because it changes and transforms how the user will interact with this, and what they’re expecting, too, to get feedback loops. So the thing that’s really interesting is understanding the amount of decisions being made from these two perspectives: what does the model need, and what does the user need? Because you have an additional need, actually.
Nacho: Yeah, definitely. I mean, when you analyze the user journey, you come up with the models that you will need. You first come up with hypotheses that you need to verify with the AI models, let’s say, because we first need to confirm whether all these AI components that we are imagining together with the user are feasible or not. We discover data sources, as we mentioned before, that may or may not be associated with each of these components. But your point is also important: we even put design constraints on the AI models themselves. For instance, if my plan is to do screening for a disease (all my examples will be related to healthcare because that’s my field of expertise), you want to reduce the number of false negatives, because if you’re screening for a very dangerous disease and you miss one of the positive cases, then that person is at risk. That kind of design principle, let’s say, or design constraint for the AI model, is basically detected at this point of the design process. Otherwise, the issue is that we will end up inventing a solution for a problem that doesn’t exist. So it’s a very important point.
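The screening constraint described here (prefer false alarms over missed cases) often shows up concretely as a lowered decision threshold. A minimal sketch, with scores and labels made up purely for illustration:

```python
# Screening sketch: lowering the decision threshold trades false positives
# for fewer false negatives (higher sensitivity).

def confusion(scores, labels, threshold):
    """Count (false negatives, false positives) at a given threshold."""
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return fn, fp

scores = [0.95, 0.60, 0.40, 0.30, 0.20, 0.10]  # model's disease probability
labels = [1,    1,    1,    0,    0,    0]     # 1 = patient has the disease

# The default 0.5 threshold misses one sick patient.
print(confusion(scores, labels, 0.5))   # (1, 0)
# Lowering it to 0.25 catches everyone, at the cost of one false alarm.
print(confusion(scores, labels, 0.25))  # (0, 1)
```

Choosing where to sit on that trade-off is exactly the kind of design constraint that surfaces when designers and AI experts analyze the user journey together.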
Calde: Yes, and something that happens is that, in reality, the process for designing a model and the process for designing a product have things in common. There are departments working in silos, but in reality they are kind of doing the same thing, because we start by understanding the problem and gaining an idea of how to solve it. And we build something that is like a proof of concept in the case of machine learning, or a prototype in the case of product design, and we try it against reality, trying to get to some KPIs. In terms of product design, it always has to do with increasing the productivity of the application, or how the user works with it; it has to do with adapting to the real scenarios of users, their needs and goals. And in the case of the model, you’re also looking to get to a number, an improvement in a metric, and if you fail you go back and work on the model design. It’s the same thing we do in product design: if we fail with a prototype, we go back to our designs and try a new prototype, and we advance in that sense. So it’s an iterative process with a lot of similarities, and the two can be connected, put in conversation with each other.
Nacho: Yeah, and now that you mention it, you’re totally right. When I started to be involved in product-related projects and began to see all these product processes, I found that there were common denominators with how we do AI in academia, for instance, and how it’s done for PoCs. So it’s not very difficult to involve AI experts in the design process in the end, because it’s just a matter of, you know, connecting the dots, finding the similarities between both processes, and then trying to augment one with the best of the other. For instance, this thing that you guys mentioned before about feedback loops is also one of our latest obsessions with these people here: thinking about the short feedback loops and the long feedback loops. When we talk about AI-based solutions, we always collect data, right? Not just for the purpose of understanding whether the results of the algorithm still hold, and to detect data drift problems and things like that, but also to improve the model through time, to fine-tune the models and make them better. And that’s the fundamental feedback loop that we always have to take into account.
Calde: Yes, and sorry to interrupt you, right? But I’m not sure if for example listeners of this podcast come from product design or are aware of what short feedback loops and long feedback loops mean, so maybe we can describe them. Nacho, can you describe them?
Nacho: Ok, I will do my best. I feel like I’m in an oral exam now. Right, when we talk about long feedback loops, we are talking about having a task that we want to perform. We don’t want to automate the task; maybe we’re augmenting the process of doing that task with AI so that we can achieve it. So we have a goal, we follow a series of steps that can be automated or done manually, and in the end we get a result. What’s our final goal? To make this process as efficient as possible. So the long feedback loop is how we try to reduce the amount of time, or the number of interactions, that we need to go from the inputs to the expected output. When we talk about short feedback loops, we are talking about all the human interventions that we do along the process to make sure that we come up with a good solution. So for example, if we are writing a piece of text and we ask an AI model to highlight the most important parts of it, and then we correct the output of the model, we want to collect that data, because in the end we want to improve this short feedback loop, with the final goal of improving the long feedback loop as well.
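The two loops Nacho describes can be sketched roughly as follows. This is a hypothetical illustration, not an API from the episode: the `FeedbackCollector` class and its method names are invented, and the short loop is just "record each correction" while the long loop is "periodically export corrections as fine-tuning data".

```python
# Sketch of short vs. long feedback loops around an AI feature.
# Short loop: record each user correction as it happens.
# Long loop: periodically aggregate corrections into retraining data.

class FeedbackCollector:
    def __init__(self):
        self.corrections = []

    def record(self, model_output: str, user_edit: str) -> None:
        """Short loop: store what the model said and what the user changed."""
        self.corrections.append({"model": model_output, "user": user_edit})

    def export_training_pairs(self) -> list[tuple[str, str]]:
        """Long loop: turn accumulated corrections into fine-tuning pairs."""
        return [(c["model"], c["user"]) for c in self.corrections]

fb = FeedbackCollector()
fb.record("Highlight: section 2", "Highlight: sections 2 and 4")
fb.record("Summary: drug A works", "Summary: drug A works for subtype X")
print(len(fb.export_training_pairs()))  # 2 pairs ready for fine-tuning
```

The design question is then where each `record` call surfaces in the interface, which is exactly Calde's point about the loops needing a manifestation in the user's experience.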
Calde: In the last case, the case of short feedback loops, we try to display a quick reaction to users: the system reacts in real time according to what the user asked for, right? That’s the experience users have with short feedback loops. And that informs the system in a way that closes the loop quickly, which is why they’re called short feedback loops. I think that’s important too, right? Because we’re feeding the model, but these loops also need a manifestation in the experience of the user. And in that sense there’s the interface, which can be a graphic interface or maybe an instructional interface, like we’re used to doing with prompts.
Nacho: Vico, you want to say something?
Vico: I was just thinking that I don’t want to miss your reflection. I was thinking of how you brought in your personal experience, contrasting the process of academia, the basic process of creating PoCs or even papers, with design processes. And I really want to emphasize that I think that’s the power of a design approach. We all use these types of processes, be it when you’re trying to write a paper, or your thesis when you’re trying to get your degree, or even as designers not just in the digital space but also in the physical space. When we go to the basics of the process, it all seems similar or even the same, right? You start with rounds of iteration, different tools are used, but we all understand it works the same way for writing a paper, building a house, or building digital solutions. And I think that’s when we understand that design is also a mindset, especially if you’re thinking about human-centered design. So I really didn’t want to miss the chance to emphasize that. And I think our passion for human-centered design has led us to say: ok, let’s reflect and double down on this, on how we can empower each stage, and how we learn from our processes in academia and building PoCs to interact in the best way when designing solutions.
Nacho: It’s nice to reflect on that point in particular, because one thing that we do in academia, and that is also done in design processes, is experimenting as fast as possible. We first need to come up with a quick solution, try it, and then improve on it. And usually, one of the things that I try to do when we run design workshops, for instance with customers, is to figure out how to solve the different tasks or problems that are being presented. So it’s kind of a solutioning exercise, even if it’s something that we do in the background. Because we need a clear idea about the feasibility of something, we need to at least design a very stupid solution at the very beginning, to know how far we are from the final solution that we might need. And that solution thing is kind of tricky, because you need to, let’s say, hide it from the users, because they don’t want to know the details of what you are doing in the background. But at the same time, some of the things that you’re doing in the background will affect the next questions that you ask the users. It might be that you say: hey, I will go for a deep learning solution that needs a lot of data. And then, when I talk to a user, I learn that this data is not available, or is difficult to collect, so I have to go back to the solution that I had in mind and redesign it. So it’s tricky, because we need to start designing the solution without implementing it, since we’re still in a discovery phase, let’s say, but at the same time that solution is being constrained by the environment. And we also know that the solution we develop will probably end up constraining the user journey as well. So it’s, again, the chicken or the egg, the same situation.
Calde: Yes. I’m wondering about the times for experimentation, and maybe there’s a difference there, right? The time that you need for experimentation on a machine learning model versus the time that you need for experimentation in product design. I mean, it seems like product design works faster in that sense, like getting something validated fast. In some cases you can do it, or build a big proof of concept that allows you to try something, make an experiment, learn from it, and iterate at the same time. But in other cases there might be a need for time in the middle, like when experimenting with AI. So what do you think, Nacho, about that?
Nacho: About time, I always say the same thing: it really depends on the problem. I always close discussions saying that. It always depends on the data or the problem. But yeah, to your point, Calde, I think the trickiest part is that, for instance, you can mock up an app relatively easily right now, because we have lots of tools and can even use AI for doing that, so that design experimentation is relatively fast. The AI part is also quite fast now, let’s say, but it really depends on the problem. For instance, if you’re focusing on a text-based application, you probably won’t need to train your own model. It’s just a matter of paying for the OpenAI API and then writing the right prompts to implement a solution and give it a try. Maybe you can get a glimpse of how far you are from the final solution, because, for instance, we have learned in the past, implementing prompt-based solutions that are far from perfect, that they already give some insight into how feasible the task is. And then there’s another layer of experimentation that needs to be integrated here, which is: alright, I can give it a try, I can implement these PoCs in a relatively short time now that I’m using LLMs, but I also need to put those models in the hands of the users as fast as possible, because I want to get feedback from them. I want to understand whether this solution, which might be good or not in my opinion, is good or not for the user in particular. And for that we already have tools as well. We have things like Streamlit, for instance, the Python library that allows you to come up with a straightforward interface that you can use to interact with and try the models, and that’s something we actually do a lot at Arionkoder. So when a customer comes and wants a PoC, we usually go that way: we design a pretty basic user interface using Streamlit and we connect the algorithms, so they can interact and declare themselves victorious or ask us for something else.
Calde: You’re telling our biggest secret, shhh! We are using that more and more, right? These PoCs for trying something and validating it before going forward. And, as always in software, there’s that part about very fast iteration, and discovering the actual problem you have to solve by creating something.
Nacho: Yeah, I think that we’re running out of time, but I’d like to add a few more things; maybe we could discuss them later in a future show. But this particular thing about failing fast, I think it’s super important in the context of AI, probably even more important than in the context of traditional software. Because in traditional software you can fail if you mis-scoped the solution you actually wanted to implement, but then you re-design, and at some point you will converge. When it comes to AI, we’re talking about an experimental technology. It’s something that requires a scientific approach, to Vico’s point. We need to frame the hypothesis, try a few solutions, measure how effective those different solutions were, and present those numbers to the customer so that he or she can say: this is okay, this is not. And then, once we declare ourselves victorious or not, we can move forward to the other part, which is also a fun part, one of my favorites, because when you start interacting with the user you start learning about other challenges that might appear. A product ends up becoming an infinite, growing thing that keeps incorporating more and more features and solving more and more issues. Maybe at some point it’s completely different from what you had in mind at the very beginning, but it’s valuable to your users, which is the end goal, right?
Calde: Yeah, we’re laughing with Vico because I’m pretty sure there’s a button that you hit and it says “Product’s never done!”. Yeah, it’s never done.
Vico: I was going to say that.
Calde: Yeah.
Vico: We should have like a banner saying that, Product’s never done. I was sure 100% you were saying that and probably this episode could also be called that.
Calde: I was saying, let’s do this. I’ll count to three and we’ll say it, the three of us.
Nacho: Yeah, let’s do it.
Calde: Three!
All of them: Product is never done!
Nacho: I think that’s our cue to finish the episode. So thank you very much, guys, for being here today, and thank you to the audience. If you made it to this point, that means you listened to our episode all the way to the end, and we really appreciate that. We have covered in this particular show some of our latest obsessions around product design, so thank you for being here. If you enjoyed this episode, please leave us a review on Apple Podcasts or Spotify, or comment in the box below if you’re watching us on YouTube. And remember that Chain of Thoughts seeks to build a community of product designers, developers, and AI experts, to keep creating great digital tools for customers and users. So to keep this community growing, please like, follow, and subscribe to Chain of Thoughts on Spotify and YouTube, and of course on Apple Podcasts as well, and share it with your own followers using the hashtag #ChainofThoughtspodcast on X and LinkedIn. You can also follow Arionkoder on social media platforms to keep yourself updated about the show and our actions as a company, and see you soon for the next piece of the chain.
Vico and Calde: See you soon!