AI in healthcare is moving fast, especially with the rise of LLM-based agents and workflows. Whether your organization has already implemented AI or is just starting to explore its potential, this is the perfect time to validate new ideas through a focused Proof of Concept (PoC).
Why? Because AI’s rapid evolution is unlocking new ways to streamline workflows, automate tedious tasks, and enhance decision-making. But only if those implementations align with real-world needs. A well-structured PoC can quickly reveal whether an AI-driven approach is feasible, valuable, and compliant.
At Arionkoder, we recently put this to the test with OncoRX, a decision-making helper for precision oncology. What started as a concern from our customer turned into a small Proof of Concept, and we are now on the path to a fully implemented AI-driven solution. Here's how we did it, in just four days.
From Proof of Concept to Production: The OncoRX Case
The Problem: A Bottleneck in AI-Powered Workflows
OncoRX had already integrated an AI-powered component for handling molecular lab reports, but scaling was a challenge. Every new customer required tailoring the system to a different report format, a time-consuming process that demanded significant implementation work and made adoption slow and inefficient.
We proposed an alternative: instead of manually adapting and tailoring the existing code to process every report variation, could we use multimodal Vision Language Models (VLMs) to make the process more flexible and scalable?
The Test: A Four-Day Proof of Concept
The goal was straightforward: quickly prove whether a new AI-based approach could work within OncoRX's existing healthcare workflows while maintaining compliance.
Day 1: Testing the Idea
We ran a quick test using ChatGPT with a few anonymized and redacted lab reports to see how well it could handle the variability in their formats. We shared documents from different labs with ChatGPT, quickly tuning a prompt that explained exactly what we expected to extract from each document and the structured format we wanted it in (essentially, a JSON object). The results were very promising: the LLM's adaptability suggested it could significantly reduce the manual effort required and speed up adaptation to new formats by roughly 10x, while maintaining accuracy similar or superior to the existing approach.
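To make the approach concrete, here is a minimal sketch of the kind of extraction prompt and output validation we mean. The field names and wording are illustrative only, not OncoRX's actual schema or prompt:

```python
import json

# Hypothetical target fields -- illustrative only, not OncoRX's real schema.
REQUIRED_FIELDS = ["patient_id", "specimen_type", "genes_tested", "variants_detected"]

def build_extraction_prompt(report_text: str) -> str:
    """Compose a prompt asking the model to return only structured JSON."""
    schema_hint = ", ".join(f'"{f}"' for f in REQUIRED_FIELDS)
    return (
        "You are extracting data from a molecular lab report.\n"
        f"Return ONLY a JSON object with the keys: {schema_hint}.\n"
        "If a field is missing from the report, use null.\n\n"
        f"Report:\n{report_text}"
    )

def parse_model_output(raw: str) -> dict:
    """Validate the model's reply: it must be JSON containing every required key."""
    data = json.loads(raw)
    missing = [f for f in REQUIRED_FIELDS if f not in data]
    if missing:
        raise ValueError(f"Model output missing fields: {missing}")
    return data
```

Because each lab's quirks live in the prompt rather than in format-specific parsing code, supporting a new report layout becomes a matter of prompt tuning instead of an implementation project.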
Days 2–4: Ensuring Compliance
In healthcare, AI feasibility is just the first hurdle. Compliance is equally critical. We spent the next three days validating whether we could implement this in accordance with HIPAA regulations by keeping all data secure in a protected environment. Since OncoRX already operated in a HIPAA-compliant AWS environment, we needed a solution that fit seamlessly within its infrastructure.
Instead of relying on external services like ChatGPT, we experimentally evaluated Llama Vision on AWS Bedrock. After minor challenges, such as formatting inputs differently, adapting prompts, and validating the infrastructure, we were able to move our early GPT-based prototype to a fully controlled, HIPAA-compliant environment while maintaining practically the same performance we saw on Day 1.
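A sketch of what such a swap can look like with boto3's Bedrock Converse API, which accepts an image block (the scanned report) alongside the text prompt. The model ID is illustrative, and actual availability depends on your AWS region and account; the boto3 import is deferred so the payload builder can be exercised without the SDK:

```python
# Illustrative model ID -- check availability in your own Bedrock region/account.
MODEL_ID = "us.meta.llama3-2-90b-instruct-v1:0"

def build_messages(report_png: bytes, prompt: str) -> list:
    """Assemble a Converse-API message pairing the report image with the prompt."""
    return [{
        "role": "user",
        "content": [
            {"image": {"format": "png", "source": {"bytes": report_png}}},
            {"text": prompt},
        ],
    }]

def extract_from_report(report_png: bytes, prompt: str) -> str:
    """Run the extraction inside the HIPAA-scoped AWS account, so PHI stays put."""
    import boto3  # deferred: only needed when actually calling Bedrock
    client = boto3.client("bedrock-runtime")  # inherits the account's region/credentials
    response = client.converse(modelId=MODEL_ID,
                               messages=build_messages(report_png, prompt))
    return response["output"]["message"]["content"][0]["text"]
```

Because the call goes through the organization's own Bedrock endpoint rather than an external API, the report bytes never leave the compliant AWS boundary.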
The new module ensured that no Protected Health Information (PHI) left the secure system, allowing AI-powered flexibility without sacrificing compliance.
The Outcome: A Green Light for Full Implementation
The rapid PoC provided a clear path forward:
- Validated feasibility of LLM-based processing for lab reports.
- Confirmed HIPAA compliance by keeping all operations within a protected cloud environment.
- Demonstrated real value, reducing onboarding time for new customers and making the system more adaptable.
We are now moving this new system into production, so what started as a four-day experiment is quickly becoming a full-fledged production implementation.
Why This Matters for Healthcare AI Teams
The OncoRX case illustrates why now is the time to run fast, targeted Proof of Concepts. AI capabilities are evolving rapidly, and small, well-designed tests can cut through uncertainty and reveal high-impact opportunities.
As a final recommendation, we leave this simple heuristic:
- Already using AI? A PoC can help you explore new ways to improve efficiency without overhauling your system.
- Still evaluating AI? A low-risk experiment can determine if LLMs (or other forms of AI) can integrate effectively into your workflows.
With the right focus, a few days of testing can pave the way for meaningful AI adoption in healthcare. The key is to move fast, validate real-world feasibility, and ensure compliance from the start.
Would a rapid AI Proof of Concept help your organization unlock new possibilities? Let’s talk.