Case Study: How Salomatic Uses Langtrace & DSPy to Build a Reliable Medical Report Generation System

Karthik Kalyanaraman

Cofounder and CTO

Dec 3, 2024

Introduction: Transforming Healthcare Reporting with LLMs

When Anton and his co-founders launched Salomatic, their mission was clear: make medical reports understandable for everyone. Based in Tashkent, Uzbekistan, Salomatic uses large language models (LLMs) to generate detailed and easy-to-read patient consultations from just a few pages of doctor notes and lab results. In a country where medical data is often presented in a confusing and technical manner, Salomatic’s solution has quickly gained traction.

However, as Salomatic grew, so did their challenges. Generating accurate and reliable reports was becoming increasingly difficult. LLMs, though powerful, are prone to missing key medical data like lab results—a non-starter for healthcare professionals. That's when the team turned to Langtrace, an observability tool that changed the way they monitored and improved their LLM-powered system.

The Problem: Managing Complexity in Medical Data

Salomatic's solution aims to address a critical pain point in healthcare: the lack of clarity in medical reports. Doctors often scribble notes that are incomprehensible to patients, leaving them confused and unsure about their health. Salomatic’s product takes these notes and, using LLMs, generates 20-page patient-friendly consultations that explain diagnoses, treatments, and lifestyle recommendations in a way anyone can understand.

But the process wasn’t without its hurdles. LLMs frequently skipped over entire sections of lab data, leading to incomplete reports. Errors in extracting and structuring data meant Salomatic was spending hours manually correcting reports. "We were manually fixing 40% of the reports, which was unsustainable," Anton recalls.

The Solution: Langtrace’s Game-Changing LLM and DSPy Observability

Enter Langtrace. Once integrated into Salomatic’s workflow, Langtrace provided the visibility they needed to diagnose and fix errors in real time. By tracing every LLM and DSPy call, the team could see exactly where the system was breaking down.

One significant challenge involved data extractions the system handled inconsistently, producing errors that undermined report accuracy. By tracing each failing extraction back to its root cause, the team could implement fixes quickly, leading to a dramatic improvement in the system’s reliability and overall performance.

“We learned more about how DSPy really works in a few hours with Langtrace than in months of trial and error,” Anton explains. This newfound clarity allowed Salomatic to resolve persistent issues, significantly reducing errors and enabling them to increase automation and scale their operations effectively.

The Technical Stack: Building a Reliable System

Salomatic's tech stack includes:

  • Python 3.12: The backbone of their codebase.

  • DSPy: Handles LLM interactions and structured data extraction, crucial for breaking down complex tasks.

  • Pydantic: Used for data modeling and validation (a sketch follows this list).

  • Azure Cloud with Azure OpenAI API Service: Powers their LLMs.

  • FastAPI, SQLAlchemy, and Alembic: Manage their backend services, ORM layer, and database migrations.
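
To illustrate the validation role Pydantic plays here, consider a minimal sketch of how extracted lab data could be checked before a report is assembled. The schema below is a hypothetical illustration, not Salomatic’s actual data model:

```python
from pydantic import BaseModel, Field

class LabResult(BaseModel):
    """One measured analyte within a lab panel (illustrative fields)."""
    analyte: str = Field(description="e.g. 'Hemoglobin'")
    value: float
    unit: str = Field(description="e.g. 'g/dL'")
    reference_range: str | None = None  # absent ranges can be flagged for review

class LabPanel(BaseModel):
    """A named panel grouping related results, e.g. a complete blood count."""
    panel_name: str
    results: list[LabResult]

# Validation fails loudly on malformed LLM output instead of letting a
# bad value slip silently into a patient-facing report.
panel = LabPanel.model_validate({
    "panel_name": "Complete Blood Count",
    "results": [{"analyte": "Hemoglobin", "value": 14.2, "unit": "g/dL"}],
})
```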

The decision to build on DSPy was deliberate. LLMs alone couldn’t reliably extract all the necessary data from unstructured doctor notes, often missing critical lab results. DSPy let the team break the extraction process into manageable tasks: first extracting the lab panel names, then extracting the results for each individual panel. This layered approach drastically improved accuracy.
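
Here is a minimal sketch of that two-stage pattern in DSPy, assuming a recent DSPy release with typed class-based signatures. The module, signature, and field names are illustrative, not Salomatic’s production code:

```python
import dspy

# Assumes an LM has already been configured, e.g. via dspy.configure(lm=...)
# pointing at an Azure OpenAI deployment.

class ExtractPanelNames(dspy.Signature):
    """Identify every lab panel named in the raw notes."""
    notes: str = dspy.InputField(desc="raw doctor notes and lab printouts")
    panel_names: list[str] = dspy.OutputField(desc="e.g. ['Complete Blood Count', 'Lipid Panel']")

class ExtractPanelResults(dspy.Signature):
    """Extract the individual results for one named lab panel."""
    notes: str = dspy.InputField()
    panel_name: str = dspy.InputField()
    results: list[str] = dspy.OutputField(desc="one 'analyte: value unit' entry per result")

class LabExtractor(dspy.Module):
    """Two-stage extraction: find the panels first, then pull each panel's results."""
    def __init__(self):
        super().__init__()
        self.find_panels = dspy.Predict(ExtractPanelNames)
        self.extract_results = dspy.Predict(ExtractPanelResults)

    def forward(self, notes: str) -> dict[str, list[str]]:
        panels = self.find_panels(notes=notes).panel_names
        return {
            panel: self.extract_results(notes=notes, panel_name=panel).results
            for panel in panels
        }
```

Scoping each call narrowly is what makes failures visible: if one panel’s results come back empty, the faulty step is immediately identifiable rather than buried inside a single monolithic prompt.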

But DSPy wasn’t foolproof. That’s where Langtrace came in, giving Salomatic the insights needed to fine-tune their DSPy modules. Langtrace’s native support for DSPy made it the perfect fit for their debugging needs, ensuring that their LLMs delivered high-quality, structured reports.
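
The integration itself is lightweight. Based on Langtrace’s published Python SDK setup, initialization is two lines, called before the libraries it instruments are imported; the API key placeholder below is yours to fill in:

```python
# Initialize Langtrace before importing DSPy so the SDK can
# instrument the underlying LLM calls (per the Langtrace docs).
from langtrace_python_sdk import langtrace

langtrace.init(api_key="<YOUR_LANGTRACE_API_KEY>")

import dspy  # imported after init; DSPy calls now emit traces

# From here, each module invocation and LM request appears as a span
# in the Langtrace dashboard, which is how a team can pinpoint the
# extraction steps that are dropping lab data.
```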

Impact: Scaling with Confidence

Since integrating Langtrace, Salomatic has seen tangible improvements:

  • Significant reduction in report errors: Langtrace helped the team identify and resolve the issues behind the extensive manual corrections, sharply reducing the hands-on work each report requires.

  • Increased automation: With errors minimized, Salomatic has been able to automate more of their report generation process, saving time and resources.

  • Improved reliability: The system is now capable of generating 10 detailed reports per day, with plans to scale up to 500 reports daily as the technology continues to improve.

Salomatic's goal is to become the go-to solution for clinics in Uzbekistan and beyond. By fine-tuning their system with Langtrace, they are well on their way to achieving this.

Key Takeaways: Why Langtrace Stood Out

  • Native DSPy Support: Langtrace’s seamless integration with DSPy made it easier to diagnose and fix issues quickly.

  • Improved Understanding: Anton and his team learned more about their LLM system in a few hours with Langtrace than they had in months of trying to solve problems manually.

  • Better Product Quality: Langtrace helped Salomatic improve their report accuracy and reliability, reducing the number of complaints from their clinic partners and allowing them to scale their operations confidently.

Final Thoughts

“If LLMs are the brains of our solution, then DSPy is our hands, and Langtrace is our eyes,” Anton says. With Langtrace, Salomatic has not only been able to fix critical bugs but also gain a deeper understanding of how their system works. This has been instrumental in helping them scale their operations while maintaining the high standards required in the medical field.

If you’re a hospital or clinic looking for a better way to communicate lab results or provide clear patient consultations, don’t hesitate to reach out to Salomatic!

We’d love to hear your thoughts! Join our community on Discord or reach out at support@langtrace.ai to share your experiences, insights, and suggestions. Together, we can continue advancing observability in LLM development and beyond.

Happy tracing!
