Agentify My Search Bars! Building More Straightforward Search Functionality with LLM-Based Agents


By José Ignacio Orlando

December 13, 2024

Search bars are the unsung heroes of modern applications—silent, efficient, and often overlooked until they fail. They’re the invisible interface between users and the vast troves of data that power our digital experiences. From e-commerce websites to enterprise platforms, the humble search bar is tasked with delivering precise, relevant results in response to a user’s often vague queries. But how we achieve that precision has evolved dramatically over the years.

In the early days, search functionality was rudimentary, relying heavily on keyword-based search. These systems operated on simple matching principles: if a user’s query contained a word or phrase present in a dataset, the system returned those results. While effective for small, controlled datasets, keyword-based search quickly showed its limitations when faced with the scale and complexity of real-world data.

As data grew in both size and variety, indexing algorithms and full-text search engines emerged. Tools like Elasticsearch and Solr became game-changers, enabling applications to perform complex queries at scale. These systems introduced features like tokenization, stemming, and ranking algorithms to prioritize relevant results. They supported operators, filters, and more sophisticated query logic, but even then, they were often constrained by the user’s ability to phrase queries in a way the system could understand.

The rise of AI has brought about yet another leap in search technology. By leveraging natural language processing (NLP), AI-enhanced search systems began to bridge the gap between human language and machine logic. Semantic search, for instance, allows applications to go beyond mere keyword matching, finding results based on meaning and context. These advancements made search more intuitive but still required significant engineering and fine-tuning to deliver a seamless experience.

But now the tide is turning thanks to LLM-based agents—a paradigm shift in how we think about search. Instead of relying on pre-defined logic and rigid query structures, these agents harness the power of large language models to understand user intent dynamically. They’re capable of transforming natural language queries into actionable outputs, whether that means crafting complex SQL statements for structured data or performing semantic searches across unstructured datasets. With LLM-based agents, the search bar evolves from a reactive tool into an intelligent assistant, capable of delivering answers that feel more like a conversation than a query.

In this article, I invite you to explore how these agents work, their technical underpinnings, and the opportunities they present for developers aiming to reimagine search functionality. Let’s get started!

How Search Engines Traditionally Work in Applications

Search engines are designed to retrieve relevant data based on user input, but their methods vary depending on the type of data. For structured data in relational databases, SQL queries are the go-to solution. While they’re powerful for tasks like filtering orders or aggregating sales, they require technical expertise, and users don’t speak SQL. Developers often rely on UI-based filters or simplified search scenarios, which limit flexibility and leave much of SQL’s potential untapped.

For unstructured data like text or media, traditional approaches use keyword matching and inverted indexes to identify relevant content. Tools like Apache Lucene preprocess data into searchable structures, enabling efficient retrieval. However, they fall short when faced with semantically rich queries or multimedia content, often requiring manual descriptions to make the data searchable.

Keyword-based search engines like Elasticsearch extend these methods with indexing and stemming for better results. While effective, these traditional approaches struggle with scalability, complexity, and user intent, paving the way for more advanced solutions like LLM-powered agents.

The Game-Changing Role of LLM-Based Agents

Search functionality has historically been limited by the need to bridge the gap between human intent and machine-readable queries. LLM-based agents are a transformative solution to this challenge, offering a way to dynamically interpret natural language inputs and convert them into actionable outputs. By leveraging the contextual understanding of large language models, these agents can unlock new possibilities for both structured and unstructured data search.

Defining LLM-Based Agents

An LLM-based agent is a system powered by large language models that perceives, reasons, acts, and learns from feedback within its environment. It interprets user inputs not just as keywords but as nuanced intent, enabling it to understand even complex or ambiguous queries. In the context of search, this means grasping what a user truly wants, such as finding trends or specific details, without requiring them to provide technical instructions.

Reasoning is where the agent maps user intent to actionable steps. For structured data, this might involve generating SQL queries, while for unstructured data, it could mean performing semantic searches using vector embeddings. Its reasoning allows it to determine the best approach to retrieve accurate and relevant results.

Actions are the execution of these queries or searches, pulling data from the relevant sources. The agent adapts based on feedback, whether from the user directly or indirectly through interactions like query refinements or selected results, continually improving its ability to deliver meaningful outputs.

In search, this capability transforms static keyword-based systems into dynamic, intuitive assistants, enabling users to interact naturally while the agent handles the complexity behind the scenes. Let me tell you how we handled that for some applications.

Structured Data Queries with LLM Agents

One of the most promising applications of LLM-based agents is in querying structured data, such as relational databases. Traditionally, users needed to rely on developers or predefined filters to extract meaningful insights from such data. LLM agents, however, can bridge this gap by dynamically generating SQL queries based on user input.

Imagine a salesperson asking, “What were our total sales last quarter for products over $500?” Instead of requiring the user to know how to write a SQL query, an LLM agent can parse the intent, identify the relevant tables and columns (e.g., sales data, product prices), apply the appropriate filters, and generate code to retrieve that data.

This ability to transform natural language into SQL empowers users to explore data flexibly and interactively while maximizing the utility of structured data.
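To make this concrete, here’s a minimal sketch of the translation step using the OpenAI Python client. The schema, model name, and prompt are illustrative assumptions, not a prescribed setup:

```python
# Minimal text-to-SQL sketch. SCHEMA, the model name, and the prompt are
# illustrative assumptions, not a production configuration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCHEMA = """
sales(id INTEGER, product_id INTEGER, amount REAL, sold_at DATE)
products(id INTEGER, name TEXT, price REAL)
"""

def question_to_sql(question: str) -> str:
    """Ask the model to translate a natural-language question into SQL."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "Translate the user's question into a single SQLite query "
                    f"over this schema:\n{SCHEMA}\n"
                    "Return only the SQL, with no explanations."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

print(question_to_sql(
    "What were our total sales last quarter for products over $500?"
))
# The model's exact output varies, but it should resemble:
# SELECT SUM(s.amount) FROM sales s
# JOIN products p ON s.product_id = p.id
# WHERE p.price > 500 AND s.sold_at >= date('now', '-3 months');
```

In a real agent, the generated SQL would be validated (and ideally restricted to read-only statements) before being executed against the database, with the results returned to the user or fed back to the model for refinement.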

The agent’s ability to map intent to database schemas, handle column relationships, and account for dynamic filters makes it a game-changer. However, it also introduces challenges in ensuring accuracy, performance, and error handling when interpreting complex queries or ambiguous user inputs.

Unstructured Data Search via Semantic Search

LLMs and Vision-Language Models (VLMs) have revolutionized querying unstructured data by introducing a new approach: semantic search. By transforming both the user’s query and the dataset into vector representations in a high-dimensional space, the system identifies matches based on meaning rather than exact words.

For example, imagine a customer support team searching a ticket database for “problems with delayed delivery.” Instead of relying on exact keyword matches like “problems” and “delayed delivery,” a semantic search system using LLM-generated embeddings can locate tickets with similar meanings, even if they use different terms like “late shipments” or “package delays.”
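As a rough illustration, here’s how that ticket search might look with an off-the-shelf embedding model from sentence-transformers; the model choice and sample tickets are assumptions made for the sketch:

```python
# Minimal semantic-search sketch over support tickets.
# The model name and tickets are illustrative; any embedding model works.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

tickets = [
    "Customer reports late shipment of order #1234",
    "Package delay: item stuck at distribution center",
    "Refund request: product arrived damaged",
]
query = "problems with delayed delivery"

# Encode query and tickets into the same vector space, then rank by cosine similarity.
ticket_vecs = model.encode(tickets, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_vec, ticket_vecs)[0]

for score, ticket in sorted(zip(scores.tolist(), tickets), reverse=True):
    print(f"{score:.2f}  {ticket}")
# The two shipping tickets rank highest despite sharing no keywords with the query.
```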

AI plays a dual role in this process:

  • Embedding Generation: Creating contextually rich embeddings for both queries and data.
  • Context Understanding: Interpreting nuances in user queries to align them with relevant data.

This approach is particularly valuable for media-rich datasets, where traditional keyword indexing struggles to capture meaning. Integrating vector search frameworks like FAISS or Pinecone allows developers to efficiently query and retrieve relevant results, greatly enhancing the usability of unstructured data systems.
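Here’s a minimal FAISS sketch of that indexing step, with random vectors standing in for real embeddings:

```python
# Minimal FAISS sketch: index document embeddings, then retrieve nearest neighbors.
# Dimensions and data are placeholders; in practice the vectors come from an
# embedding model like the one in the previous sketch.
import faiss
import numpy as np

dim = 384  # e.g., the output size of all-MiniLM-L6-v2
doc_vecs = np.random.rand(10_000, dim).astype("float32")  # stand-in for real embeddings

index = faiss.IndexFlatIP(dim)   # exact inner-product search
faiss.normalize_L2(doc_vecs)     # normalize so inner product equals cosine similarity
index.add(doc_vecs)

query_vec = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query_vec)
distances, ids = index.search(query_vec, 5)  # top-5 most similar documents
print(ids[0], distances[0])
```

At larger scales, swapping the flat index for an approximate one (e.g., FAISS’s IVF or HNSW variants) trades a little recall for much faster search.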

Embeddings also enable image-to-image search by transforming images into vector representations. For instance, you could identify similar products in a database by simply uploading a picture of one. This capability is, in fact, at the core of the Google Images search engine, so you might already be familiar with it.
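A quick sketch of that idea, using a CLIP model through sentence-transformers (the catalog and file names are hypothetical):

```python
# Minimal image-to-image search sketch with CLIP embeddings.
# File paths are hypothetical placeholders.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")

catalog = ["shoe_red.jpg", "shoe_blue.jpg", "mug_white.jpg"]  # hypothetical product images
catalog_vecs = model.encode([Image.open(p) for p in catalog], convert_to_tensor=True)

# Embed the uploaded picture and find the closest catalog items.
query_vec = model.encode(Image.open("uploaded_photo.jpg"), convert_to_tensor=True)
scores = util.cos_sim(query_vec, catalog_vecs)[0]
best = scores.argmax().item()
print(f"Closest match: {catalog[best]} (score {scores[best].item():.2f})")
```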

Finally, VLMs’ image captioning capabilities open the door to both text-to-image and image-to-text search. Users no longer need to write keywords to describe an image; VLMs can generate captions instead. This enables traditional (or semantically enriched) search to be seamlessly integrated, with VLMs working behind the scenes.
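As one possible implementation, a captioning model such as BLIP from Hugging Face can generate the text that then gets indexed; the model choice and image path below are assumptions:

```python
# Minimal VLM-captioning sketch: turn an image into text so it becomes
# searchable by any text engine, keyword-based or semantic.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
)

image = Image.open("product_photo.jpg").convert("RGB")  # hypothetical path
inputs = processor(images=image, return_tensors="pt")
caption_ids = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(caption_ids[0], skip_special_tokens=True)
print(caption)  # e.g., "a red running shoe on a white background"
```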

What LLM Agents Unlock for Search Is at Your Fingertips

The rise of LLM-based agents has transformed search from a static utility into a dynamic, intuitive experience. By removing UX barriers such as clickable filters or predefined search queries, these agents empower anyone to interact naturally, asking questions like, “Show me last month’s sales and top-performing products,” without worrying about how the query is processed. This opens the door to personalized and multi-intent queries that feel more like conversations than technical instructions.

One of the most exciting advantages is the flexibility across data types. LLM agents unify structured and unstructured data under a single query interface, enabling users to retrieve sales numbers from a database while simultaneously searching customer reviews for relevant insights. Additionally, these agents handle dynamic schema changes—they adapt seamlessly to evolving data structures, saving developers from the painstaking task of manual updates.

At Arionkoder, we’ve seen this flexibility firsthand in two groundbreaking projects. In our Caramel experiment, we explored LLM-driven code generation not just for querying structured data (with the Python data science library Pandas, in this case) but also for letting users request calculations and further data processing. What we found in our demos with users was unexpected (well, was it really?): users started with simple queries, and the first signs of success rapidly raised their expectations, pushing the proof-of-concept (POC) tool beyond mere queries and testing its ability to handle increasingly complex logic and workflows. This behavior highlights how LLM agents don’t just respond to user intent—they invite creativity and exploration. And, as promising as this might sound, it poses its own challenges, which we’ll explore in future articles.
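To give a flavor of the approach (a heavily simplified sketch, not Caramel’s actual implementation), here’s how LLM-driven Pandas code generation might look:

```python
# Heavily simplified sketch of LLM-generated Pandas code. The prompt, model,
# and exec-based execution are illustrative only; a real system would sandbox
# and validate generated code before running it.
import pandas as pd
from openai import OpenAI

client = OpenAI()
df = pd.DataFrame({"product": ["A", "B"], "price": [700, 300], "units": [5, 12]})

question = "What is the total revenue for products over $500?"
prompt = (
    f"Given a pandas DataFrame `df` with columns {list(df.columns)}, "
    "write one line of Python assigning the answer to `result`. "
    f"Question: {question}. Return only code, no markdown."
)
code = client.chat.completions.create(
    model="gpt-4o", messages=[{"role": "user", "content": prompt}]
).choices[0].message.content.strip()

scope = {"df": df}
exec(code, scope)       # never run untrusted generated code unsandboxed
print(scope["result"])  # e.g., 3500 (700 * 5)
```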

We’re also applying LLM agents to a contract management system (CMS) for a pharmaceutical client. By combining metadata extraction with a search agent, we’ve enabled users to intuitively query contracts—no manual tagging or filtering required. The system captures intent directly from the search bar, offering instant access to relevant documents and insights in a table that serves as the front door to other AI applications. From there, users can click on a relevant contract to get an instant summary, a comparison against the original template, potential risks to assess, and even the ability to chat with the file, getting immediate answers to their questions without having to read the whole document.

LLM agents accelerate development timelines by auto-generating queries and workflows, reducing the burden on developers while delivering powerful search capabilities to users. These technologies are redefining search, making it more accessible, adaptable, and intelligent than ever before.

Are you ready to transform your search functionality? At Arionkoder, we specialize in building intuitive, AI-powered solutions that unlock the full potential of your data. Let’s reimagine search together! Reach out to us at hello@arionkoder.com.