Reimagining Settings Interfaces by Rethinking User Interactions with AI Agents

By Matias Villagran

January 29, 2025

The Future of Settings Interfaces

For decades, settings interfaces have remained largely static, even as design and technology have advanced. They often feel overwhelming, unintuitive, and frustrating to navigate. With the rise of LLM-based agents, however, we now have an opportunity to fundamentally change this dynamic.

Think of a typical settings screen: the search functionality is rudimentary, and it has been for decades. Finding a specific setting is difficult because the interface is written in technical jargon, which highlights the disconnect between the system's internal model and the user's mental model.

Have you ever used the search field to look for a specific setting, only to get irrelevant results? No matter how many different terms you tried, you still couldn't find what you were looking for.

But what if settings interfaces could adapt to your behavior, learn from your preferences, and evolve over time? This idea emerged as we explored how to design the level of agency in AI systems, and it led our innovation team to a series of experiments with AI-driven agents aimed at reimagining how users interact with application settings, perhaps even redefining the settings experience altogether.

Core Features of a Settings Assistant Agent

Context Awareness

The Settings Agent doesn't need to live on a separate screen: configuring settings can happen anywhere. The agent understands the screen's content and context, adapting its guidance accordingly. In our experiment, an AI assistant present on every screen is asked to notify the user every time an external action is about to happen. The agent interprets the request in natural language and changes the notification setting based on the content displayed on the screen.

This capability eliminates the friction of navigating menus, enabling users to ask directly for explanations or solutions in their context, making the experience more fluid and intuitive.
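As a rough illustration, here is a minimal sketch of how screen context could be packaged with a natural-language request. ScreenContext, SettingChange, and callLLM are hypothetical names standing in for whatever LLM client and settings store a real application would use; they are not part of the experiment itself.

```typescript
// Hypothetical types and stubs; not a real framework API.
interface ScreenContext {
  screenId: string;          // e.g. "external-actions-overview"
  visibleEntities: string[]; // entities currently rendered on screen
}

interface SettingChange {
  key: string;    // canonical setting identifier
  value: unknown; // new value proposed by the agent
  scope: string;  // the context the change applies to
}

// Stubbed LLM call; a real implementation would call a model with
// structured (JSON) output and validate the response.
async function callLLM(prompt: string): Promise<SettingChange> {
  console.log("Prompt sent to model:\n" + prompt);
  return { key: "notifications.externalActions", value: true, scope: "external-actions-overview" };
}

async function handleSettingsRequest(
  userMessage: string,
  context: ScreenContext,
): Promise<SettingChange> {
  // Injecting the screen context lets the agent map a request like
  // "notify me when this happens" onto the setting behind the current view.
  const prompt = [
    `User is on screen "${context.screenId}" showing: ${context.visibleEntities.join(", ")}.`,
    `User request: "${userMessage}".`,
    `Return the single setting change that fulfills this request as JSON`,
    `with keys "key", "value", and "scope".`,
  ].join("\n");
  return callLLM(prompt);
}
```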

Learning and Personalization

A truly impactful agent isn’t static—it learns and evolves over time. We explored the concept of a learning agent that proposes changes to the user as it becomes more attuned to their preferences and behaviors.

We focused on experimenting with an onboarding experience, where initial user trust can be low and users may hesitate to delegate too many actions to the agent. Users can easily adjust the system's level of agency by prompting the settings assistant; then, after a few weeks, the system can proactively propose to automate more actions.
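To ground this, here is a minimal sketch of an adjustable agency model. The three levels and the trust thresholds are assumptions made for illustration; the experiments did not prescribe specific values.

```typescript
// Hypothetical agency levels; the experiments did not prescribe these.
type AgencyLevel = "ask-first" | "suggest" | "auto-apply";

interface AgencyProfile {
  level: AgencyLevel;
  acceptedSuggestions: number; // how often the user approved agent proposals
  onboardedAt: Date;
}

// The user can raise or lower agency at any time by prompting the assistant.
function setAgency(profile: AgencyProfile, level: AgencyLevel): AgencyProfile {
  return { ...profile, level };
}

// After a trust-building period the agent may proactively propose more
// automation. The thresholds (three weeks, five accepted suggestions) are
// illustrative only.
function shouldProposeMoreAutomation(profile: AgencyProfile, now: Date): boolean {
  const weeks =
    (now.getTime() - profile.onboardedAt.getTime()) / (7 * 24 * 60 * 60 * 1000);
  return profile.level !== "auto-apply" && weeks >= 3 && profile.acceptedSuggestions >= 5;
}
```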

To maintain clarity, we experimented with distinct UI elements that differentiated user-initiated prompts from bot-initiated suggestions. This approach ensured transparency, helping users quickly identify the source and purpose of each interaction.

A distinct UI style differentiated user input from bot-initiated suggestions

By adapting to user habits and learning over time, the agent fosters trust and encourages engagement, making the system feel less like a tool and more like a personalized assistant.

Control and Transparency

Transparency is crucial for user trust. To reinforce this, we designed the agent to provide users with direct paths to a traditional settings UI to access and modify settings it created or suggested. For instance, upon creating a new configuration, the agent would immediately offer a link to revisit and edit that setting in the future.

We also added the option to undo and modify actions performed by the agent, so that interpretation errors can be corrected. This enhances user control, ensuring that even as the system automates tasks, users remain aware of how to manage and adjust these automations independently.
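A minimal sketch of how such undoable actions might be tracked, assuming each agent change records the previous value alongside a deep link into the traditional settings UI. All names here are illustrative.

```typescript
// Illustrative action log; every agent change stores its previous value
// and a deep link back into the traditional settings UI.
interface AgentAction {
  id: string;
  settingKey: string;
  previousValue: unknown;
  newValue: unknown;
  settingsUrl: string; // path to revisit and edit this setting later
}

class ActionLog {
  private actions: AgentAction[] = [];

  record(action: AgentAction): void {
    this.actions.push(action);
  }

  // Undo restores the stored previous value, so a misinterpreted request
  // can always be reverted without hunting through menus.
  undo(id: string, applySetting: (key: string, value: unknown) => void): boolean {
    const action = this.actions.find((a) => a.id === id);
    if (!action) return false;
    applySetting(action.settingKey, action.previousValue);
    return true;
  }
}
```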

Multi-Context Replication

One challenge introduced by our proposal of accessing and modifying settings in context is the need to ensure consistency across similar contexts. To address this, we explored a dialog in which the agent offers to replicate changes made in one context to all relevant contexts, streamlining the process and saving users significant time and effort.
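Below is a rough sketch of how such a replication offer could be driven. The similarity test is deliberately naive (the same setting key appearing in another context with a different value); a real system would need richer context matching.

```typescript
// Illustrative replication offer; the similarity test here is deliberately
// naive (same setting key in another context with a different value).
interface ContextSetting {
  contextId: string;
  key: string;
  value: unknown;
}

function findReplicationCandidates(
  change: ContextSetting,
  allContexts: ContextSetting[],
): ContextSetting[] {
  return allContexts.filter(
    (c) =>
      c.contextId !== change.contextId &&
      c.key === change.key &&
      c.value !== change.value,
  );
}

// The agent surfaces a single confirmation instead of asking the user to
// repeat the change manually in every similar context.
function buildReplicationPrompt(change: ContextSetting, candidates: ContextSetting[]): string {
  return `Apply "${change.key}" to ${candidates.length} similar context(s) as well?`;
}
```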

Dynamic User Interface Adaptation

The most ambitious feature we explored involved dynamically altering the interface based on user prompts. Users could request specific graph information or interact with graphs directly. The agent would not only display the requested data but also add new control components modeled on existing ones, allowing users to access and configure these settings in the future while keeping the interface coherent.
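One way to keep such dynamically generated controls coherent, sketched below under our own assumptions, is to have the agent emit a constrained component schema rather than raw markup, so the client renders everything through its existing design system. The concrete control kinds are hypothetical.

```typescript
// Hypothetical component schema; the concrete control kinds are assumptions.
type ControlSpec =
  | { kind: "toggle"; label: string; settingKey: string }
  | { kind: "select"; label: string; settingKey: string; options: string[] }
  | { kind: "chart"; title: string; metric: string };

// The client maps each spec onto its own component library, so everything
// the agent adds inherits the existing look and feel. Here we only describe
// what would be mounted.
function renderControl(spec: ControlSpec): string {
  switch (spec.kind) {
    case "toggle":
      return `Toggle "${spec.label}" bound to ${spec.settingKey}`;
    case "select":
      return `Select "${spec.label}" (${spec.options.join(" / ")}) bound to ${spec.settingKey}`;
    case "chart":
      return `Chart "${spec.title}" plotting ${spec.metric}`;
  }
}
```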

This approach pushes the boundaries of traditional interfaces, creating a system that grows and evolves alongside user needs, opening up possibilities for a truly adaptive and dynamic experience.

Technical Challenges

Implementing AI-driven adaptations to settings interfaces raises a range of complex technical questions that demand thoughtful consideration. Conversations with the technical team surfaced several challenges worth thinking about:

  • How can we develop software that prioritizes user-driven control and adaptability over rigid business rules?
  • What mechanisms can efficiently store and retrieve both user-specific and organization-wide adaptations?
  • What resources does the Agent need to dynamically modify screens in real-time?
  • How will these modifications integrate with user assistance features and existing customer support systems?
  • How can we enable useful adaptations to be shared across users in the same organization while preserving individual customization?
  • How can user-driven adaptations inform continuous product improvement and broader system evolution?
  • What strategies ensure data security and privacy while enabling flexible software adaptations?
  • How do we maintain software integrity and prevent errors while allowing Agents to freely modify settings and properties?

Addressing these questions will be key to adopting adaptive interfaces while maintaining reliability and trust. As a starting point, the sketch below explores one way to approach the storage and sharing questions.
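This is an assumption-laden sketch rather than a proposal from the experiments: a layered settings store in which user-specific adaptations override organization-wide ones, which in turn override product defaults, so sharing an adaptation across an organization never clobbers individual customization.

```typescript
// Layered resolution: user overrides beat organization-wide adaptations,
// which beat product defaults.
type Adaptations = Map<string, unknown>;

class LayeredSettingsStore {
  constructor(
    private userLayer: Adaptations,
    private orgLayer: Adaptations,
    private defaults: Adaptations,
  ) {}

  resolve(key: string): unknown {
    if (this.userLayer.has(key)) return this.userLayer.get(key);
    if (this.orgLayer.has(key)) return this.orgLayer.get(key);
    return this.defaults.get(key);
  }

  // Sharing a useful adaptation org-wide preserves individual customization,
  // because the user layer always wins during resolution.
  promoteToOrg(key: string): void {
    if (this.userLayer.has(key)) {
      this.orgLayer.set(key, this.userLayer.get(key));
    }
  }
}
```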

How Layout Alternatives Support These Features

To bring these features to life, we experimented with different layout approaches that align with specific user needs:

The Floating Experience: Based on a modal component, this experience is ideal for quick, focused queries, allowing users to interact with the agent without disrupting their workflow view.

The Search Bar Experience: Merges familiar UI patterns with advanced functionality in an omnibar, enabling users to type prompts and receive context-rich responses thanks to the extra screen real estate compared with the floating experience.

The Sidebar Experience: Offers a more immersive interaction by integrating the agent into the main interface, supporting human-agent conversations, more complex workflows, and visualizations.

Can Agentification Transform How We Control Apps?

Our experiments reveal how AI-driven agents can turn settings screens from static menus into dynamic, responsive systems that adapt to users' needs. But this shift isn't just about better usability; it's about rethinking the relationship between humans and digital applications.

By focusing on features like context awareness, learning, transparency, and dynamic adaptation, AI-powered settings functionality could significantly reshape how users experience and trust applications’ adaptability.

If interfaces can learn, adapt, and evolve alongside their users, how might this redefine not only the way we interact with technology, but also how we design and implement software?