Transforming Radiology Reporting


TRANSFORMING RADIOLOGY REPORTING:
AI-Driven Automation with Cloud-Based RIS/PACS Integration
Category: AI-Based Radiology Reporting
Duration: 50m 39s
Speakers:
Moderated by: Brian Casey, Managing Editor, Imaging Wire
Imaging Wire Webinar
Description:
View The Imaging Wire webinar, “Transforming Radiology Reporting: AI-Driven Automation with Cloud-Based RIS/PACS Integration,” in which industry leaders Jonathan Luchs, MD FACR (Chief Medical Officer, Premier Radiology Services), Avez Rizvi, MD, DABR (Founder & CEO, RADPAIR), and Vijay Ramanathan (CEO & Co-Founder, RamSoft) describe how AI-driven automation and cloud-based PACS integration are revolutionizing radiology efficiency. Moderated by Brian Casey from The Imaging Wire, the speakers discuss the criteria used to assess (1) AI tools in general and (2) how AI fits within a cloud-based PACS/workflow environment, as well as how cloud, automation, and PACS working together can maximize value, workflow, and time savings for a radiology and teleradiology practice.
Transcript:

Brian Casey, Dr. J.S. Luchs, Dr. A. Rizvi, and Vijay Ramanathan

Brian Casey: Hello and welcome to Transforming Radiology Reporting: AI-Driven Automation with Cloud-Based RIS/PACS Integration, an Imaging Wire webinar in partnership with RamSoft. My name is Brian Casey, and I am the Managing Editor of The Imaging Wire.

We have a great program for you today. We’ll begin with a panel discussion on new technologies for radiology reporting. Following the panel discussion, we’ll open it up for questions from our audience during the second half of the webinar.

Speakers:

  • Dr. J.S. Luchs, Chief Medical Officer at Premier Radiology Services.
  • Vijay Ramanathan, CEO and Co-Founder of RamSoft.
  • Dr. A. Rizvi, Founder and CEO of Rad Pair, and a practicing teleradiologist reading for Premier Radiology Services.

Gentlemen, thanks for being with us today.

Dr. J.S. Luchs: Hey, Brian, thanks for having me.

Vijay Ramanathan: Thanks, Brian.

Dr. A. Rizvi: Thanks, Brian.

Brian Casey: So, let's get started. Dr. Luchs, you're the Chief Medical Officer for Premier, which is an organization with 120 radiologists who read over 10,000 studies a day. What are some of the challenges that you see right now with radiology reporting?

Dr. J.S. Luchs: With radiology reporting, probably the biggest challenge we face began when we transitioned to voice recognition. Previously, we relied on transcriptionists who would take what we said—either on tape or through dictation—and it was quick and efficient. This allowed us to dedicate all our time to analyzing images and being radiologists, which is what we love to do. The shift to voice recognition was interesting; I’m old enough to have been there when it happened. Initially, it was horrible, though it has certainly improved over time. However, challenges persist because radiologists now have to focus heavily on reports, which is not what we’re trained for or particularly skilled at. We're experts in anatomy and identifying pathology, not transcription. Yet, we often find ourselves acting as transcriptionists, which takes up nearly 50% of our time.

This creates reluctance among radiologists to engage with platforms where we essentially have to type or constantly review and edit reports. Even with some transcription support, there’s still a lag in turnaround time. The combination of voice recognition demands and broader issues like radiologist shortages, difficulty in hiring, and burnout compounds these challenges. Burnout is a significant problem—not only because of the increasing workload but also due to the mental exhaustion of meticulously editing reports, checking for errors like whether "left" or "right" was mentioned correctly, or even worrying about punctuation. It’s exhausting, truly. If we could eliminate much of this report-focused workload and instead concentrate on our core expertise as radiologists, we could read more studies, experience less exhaustion, and significantly reduce burnout. That’s the overarching challenge we’re facing—enabling radiologists to do what they do best.

Brian Casey: What impact do these challenges have on radiologist workflow?

Dr. J.S. Luchs: A lot, and again, it's between burnout and just, you know, wanting to do more studies. It will slow you down. When you have to go back and start looking at your report, which all of us have to do now, you look at your report and read through it in detail.

Number one, you're gonna try to make shorter reports with less detail, which is really not good for the patient. Number two, you're gonna slow down and read fewer studies per day. And three, you're gonna get tired by the end of the day. So if you weren't doing that, you could read more because you're less tired.

So, you know, by the end of the day, we're having to take breaks. It really, the big thing is turnaround time and getting the patient studies done so the proper care can be given. And reading less per day for the same radiologist will affect the patient.

Brian Casey: So, Vijay, you've been running a cloud-based RIS/PACS company for 30 years. How do you think cloud technology, automation, and PACS can work together to maximize value and time savings for radiologists from a reporting perspective?

Vijay Ramanathan: Brian, we're super excited to deliver solutions that enhance radiologist productivity, address the shortage, manage increasing volumes, and improve the quality of reports for ordering physicians. When we started, teleradiology focused on providing quick opinions on emergency CT scans and delivering results to ER physicians swiftly. We've come a long way since those early systems. In the 2000s, we were among the first to offer web-based teleradiology software to radiologists, and we embraced cloud-based solutions as soon as they matured a decade ago. This is the next evolution of teleradiology and radiology reporting.

We’re now combining the benefits of transcriptionists, who historically assisted radiologists in creating reports, with cloud-based solutions that provide single sign-on, unified worklists, and automated workflows. This creates an environment where everything a radiologist needs—images, reports, currents, priors, documents, symptoms, history—is available in one place on a single desktop, integrated with an AI-powered reporting solution. This approach greatly enhances productivity and efficiency.

Cloud technology offers tremendous benefits on the security side, but more importantly, it provides unlimited storage and scalability. With a cloud-based solution, we can easily onboard new imaging centers or hospitals, incorporate their images and priors, and eliminate constraints like hardware, storage, or network limitations. The infrastructure today allows us to deliver images to radiologists anywhere, without bandwidth restrictions tied to urban locations. This capability enables us to automate clinical workflows and efficiently deliver results to referring practices, regardless of location.

Brian Casey: Right, there are so many benefits to the cloud when it comes to medical imaging management. Now, Dr. Rizvi, you are the founder of Rad Pair, and we'll talk more about Rad Pair in a moment, but you're also a practicing radiologist. Can you share your perspective on how AI-based reporting can be part of the solution to all the challenges we’ve been hearing about today with radiology reporting?

Dr. A. Rizvi: Yeah, absolutely, Brian. So, you know, I started Rad Pair for that. That was the entire purpose, because just like Dr. Luchs and Vijay have mentioned, there's this back-and-forth that happens when you're reading cases, which contributes significantly to burnout. You're looking at the images while simultaneously trying to formulate the words for a very voice-driven dictation system. You're saying all the grammar, like "period" and "new line," and often clicking into form fields if you're doing a structured report. As your mind is split between these two tasks, as Dr. Luchs described, doing this thousands of times a day across a hundred or more cases for eight to ten hours leads to that end-of-day burnout. So, I started thinking: how do we stop this back-and-forth, this constant context switching? Generative AI made a lot of sense to me because it functions like a very intelligent resident or scribe—someone who listens to you in real time and turns around reports quickly, similar to what Dr. Luchs described with a transcriptionist. The idea was to bring this functionality to the forefront with AI and do it efficiently, and that was the genesis of this reporting system.

Brian Casey:  That sounds like a great idea. So Dr. Rizvi, I understand that you've got a short video of what's possible today with AI based reporting that's integrated with cloud-based PACS. So can you queue that up for us?

Dr. A. Rizvi: Yeah, sure.

Video 1: Rad Pair Short Video
Hey everyone, I'm gonna go over our integration with Rad Pair and RamSoft's PowerServer. As you can see on the left-hand side, we have a case open in the imaging window, a chest X-ray, and on the right-hand side of PowerServer, we have this seamless integration of Rad Pair.

So just to point out a few things, clinical history is already passed into the report, along with things like the views for this case, which is just a chest X-ray. However, the key point here is it's not just a one-to-one mapping that we are doing here; there's an AI layer too, to ensure that the information being sent over is actually relevant for two things: one, for reporting as a radiologist, and two, for billing purposes. So it's not just a one-to-one mapping; there's another AI layer there.

So with that, I'm just gonna quickly go ahead and dictate this case:

"Okay, so there are prominent bilateral interstitial markings compatible with congestive changes, and there's a small left pleural effusion versus basilar atelectasis. Right-sided IJ catheter is present within the SVC, endotracheal tube is 3.9 centimeters above the carina, and a feeding tube is present overlying the stomach in an appropriate position."

So with that, I'm gonna go ahead and process this report. Now keep in mind, I didn't dictate this and click into fields or say "new line" or "period" or any of that grammar stuff. The report's already back. It only takes a couple of seconds.

You can see everything that I said is in the appropriate position under each section that it's supposed to be in and has created a nice impression for you. At this point, you could pretty much sign the case and go to the next case.

That is the power of full generative AI reporting. You just talk to it, it does the work for you, you sign the case, and move on to the next one. It improves your efficiency, improves quality, and we are so happy to be partnered with RamSoft's PowerServer on this. It's been an incredible journey, and I hope you enjoy it as well. Thanks.

Video 2: Pair Insights
Hey guys, we're gonna go over Pair Insights, which is our Radiopaedia integration that allows radiologists to speak naturally, and the guidelines and classifications are automatically inserted. It actually uses AI to go through Radiopaedia’s knowledge base and bring back the appropriate classification or guideline.

So if you look on the screen right now, on the left-hand side, there's a pulmonary nodule case, and Rad Pair is seamlessly integrated into a RamSoft PowerServer. We're gonna quickly dictate this case and show you what it means:

"There is a one-centimeter pulmonary nodule noted within the right lower lobe, insert the guideline."

So I'll pause there and let you know what's going on. Pair Insights is actually going to Radiopaedia, finding the appropriate guideline, and bringing it back. Now, if this was a case that you were reading, obviously you wouldn't stop there; you would continue reading the case. But I've paused here just to show you what's going on.

When it comes back, which usually takes about 10 to 20 seconds, it will put in the appropriate guidelines. So in this case, it's using Fleischner Society and has put in the appropriate guideline. Now, if you wanted to check this further, you could actually read the rationale associated with this and see why it picked it. You could even go a step further and go to the actual reference article on Radiopaedia.

And then once you're done, you can accept this, and it becomes part of your transcript. Once you process this, what's gonna happen is it's gonna end up being part of your report.

So in a few seconds here, the CT will come back; it'll be under the lung section. It's already back, and it's already highlighted everything that's done. And obviously, this is the only thing we said; we didn't say anything else. And so there's a nice impression for you at the end as well.

So that in a nutshell is Pair Insights powered by Radiopaedia, fully integrated into Rad Pair and RamSoft's PowerServer. Thank you very much.

 

Brian Casey: Alright, that was really interesting. Thanks. Thanks for that, Dr. Rizvi. So, uh, Vijay, can you summarize the benefits of this integration that we're hearing about between AI-based reporting and cloud-based PACS?

Vijay Ramanathan: Sure, thanks, Brian. It's really the ease of integration by embedding the AI report generation solution into the PACS solution that serves as the real time saver. It's not about having an AI reporting solution separate from a PACS system; the entire solution is packaged together. Everything is built in—there's no need to mention punctuation or worry about the sequence of dictated elements. The impression can be generated automatically from the report.

This setup saves a significant amount of time by enabling radiologists to work from anywhere with a cloud-based reporting solution. There's no need for dedicated hardware or any proprietary equipment—just high-quality monitors and a workstation to run the RamSoft solution, which has the AI reporting solution from Rad Pair embedded.

The best part of a cloud-based solution is that it continues to improve daily. For instance, Pair Insights now incorporates the most accurate and latest radiology information, such as tumor stage classifications and BI-RADS classifications.

And here's the key: we're still at the very infancy of AI. As we look ahead, this technology will only get better, delivering increasingly advanced solutions for radiologists every single month.

Brian Casey: That's great. So, Dr. Luchs, as a CMO who needs to make decisions about adopting technologies at Premier, can you share the criteria that you use to assess AI tools? How do you decide what kind of AI to implement?

Dr. J.S. Luchs: Sure. So, listen, everything comes down to the first and most important thing, which is cost. Of course, cost is important. But taking that aside for a second, the accuracy of the AI is exceptionally important.

When you're dealing with imaging AI, an accuracy of, let's say, 85% might be pretty good. However, when you're dealing with voice-type AI, 85% is horrible. For voice recognition and AI, we would really want it to be perfect. Voice recognition is incredibly important, exactly as Dr. Rizvi was saying. If you have perfect voice recognition, or at least if the program is placing things where they’re supposed to go within the report, that’s what saves you a lot of time and prevents burnout. It ensures you’re not exhausted by the end of the day.

When that happens, you’re increasing efficiency. So, can the AI increase the efficiency of the radiologist? Can it increase throughput and turnaround time, and the number of cases we’re doing per day while maintaining the quality we’re looking for?

Another factor is integration. Is the AI integrated into the PACS, or is it a separate type of system? When you have a one-platform system, it’s much easier to load, train, and use—not just for the radiologist but also for the operational team. An integrated system makes it significantly easier to get up and running because there’s always a learning curve when you’re dealing with any team, whether it’s the radiologist or the operational aspect of the team.

If you have a system where the AI and PACS are well integrated and communicate effectively, it will significantly decrease that learning curve and allow the team to implement it successfully and make it work efficiently.

Brian Casey: So would you say that's maybe the most important consideration when you're integrating AI reporting with PACS—the ease of this integration?

 

Dr. J.S. Luchs: Yeah, exactly. Do they like talking to each other is really the most important thing, 'cause sometimes programs don't like that. And although you think that you just bought this whole system and it's gonna be up and running in two months or a month, six months later, you still don't have everybody on it. So, yeah, that makes sense.

Brian Casey: Dr. Rizvi, I wanted to talk to you a little bit about something that I've been hearing about in the field a little bit. And this is something called agentic-based AI. Can you talk about what agentic-based AI is and maybe what it might mean for radiology?

 

Dr. A. Rizvi: Sure. So, you know, if we take a little bit of a step back and think about the paradigm shifts we've seen in radiology over the last several decades, we went from essentially what was film-based radiology to a digital paradigm. With that came dictation, and I feel like that's where we've been for the last several decades.

If you think about it from a radiologist's perspective, what you see is pretty much the same UI and UX, right? You have a text box, you dictate into it, and then, at some point, you have a report, and you sign it. That's sort of been the way it’s been for, like I said, a few decades.

Where we're headed now is an actual assistant and co-pilot that communicates with you and performs tasks for you. For example, if I'm looking at a case and I’m reading it, saying, "Okay, there’s a small right pleural effusion," and I’m observing all these things, the agentic AI responds, “Yep, I got that. I’ve put that in the findings.” Then I say, “Okay, can you also add that to the impression?” and it does it for me while informing me it’s doing so.

In this case, I don’t have to take my eyes off the images. Normally, if something isn’t actively communicating with me, I still have to look back at some point to confirm whether it went into the right place. But if something is communicating with me and I can trust it, I don’t need to do that back and forth. The only thing I have to do is, once I’ve finalized my diagnosis, look back one last time to ensure everything is accurate, tell it to sign the report, and it will do so.

That’s the future. It’s a completely different paradigm from what we’ve been doing in the past. We’ll be showcasing this at RSNA, and this is not a gimmick. This is a real product. You’ll be able to use it at our booths at RamSoft and Rad Pair.

Video Demonstration:

"Wingman here, ready for action."

"Hey, wingman, I need you to combine my two templates: the OB greater than 14 weeks and the ultrasound biophysical profile together."

"Roger that, the templates have been combined."

"Please add to the findings that there’s a 25% pneumothorax noted within the left lung apex, and also put that in the impression."

"Copy that, finding added to the report and impression."

"Okay, I think we’re good to go here. Let’s go ahead and sign this report."

"Roger that, report signed. Over and out."

 

Brian Casey: Alright, cool. I can't wait to see that at RSNA. So, Dr. Rizvi, you've got a little bit of a challenge for our audience when it comes to reporting. Can you describe what that is?

Dr. A. Rizvi: Yeah, so my challenge to all the radiologists out there is this: I want you to turn on your speech mic and start talking to it, like I’m talking to you. Tell it about the football game. Talk about what happened with the election—well, maybe not that. But use normal language and see what happens.

What you’ll notice is that with legacy dictation reporting systems, you’re just going to get a bunch of gibberish. The reason why is that those systems are overfitted to radiology lexicon.

Why does that matter? The future of agentic AI-based reporting requires that the dictation—the input of speech—be something that understands natural language but can also switch seamlessly between natural language and radiology lexicon. If it can’t do that, it can’t perform a lot of the work we need from agentic AI.

This is where I think what we’ve created at Rad Pair with Speech Engine 2.0 stands out. It’s designed to handle all of those challenges and actually enables agentic AI to provide meaningful solutions. Looking forward to showing everyone at RSNA.

Brian Casey: Alright, so everybody, you can try that at home, right?

Dr. A. Rizvi: Yes, absolutely.

Brian Casey: Before we go to questions, I'd like to give Dr. Luchs a chance for some concluding thoughts. As a reminder to our audience, you can ask questions using the Q&A button at the bottom of your screen, and we will try to get to all your questions when we open that up in just a couple of minutes. Dr. Luchs, any concluding thoughts?

Dr. J.S. Luchs: Yeah, thanks, Brian. So, you know, Dr. Rizvi said something that is really interesting—this concept of agentic-based AI and how it’s something new. It really is.

But the most amazing thing, again, is that I’m old enough to say I used to use a Dictaphone. When I used the Dictaphone, I was talking to a person. Yes, it was being taped, or I was talking to someone, and I would say exactly what Dr. Rizvi just described:

“There’s a pleural effusion, there’s a pneumothorax, there’s a rib fracture. Oh, there’s an infiltrate in the right lower lobe. Make sure you also put that in the impression. And, by the way, the last report that I just did two reports ago—can you also include that?”

All of that was part of a conversation you were having with a person. Now, fast forward to voice recognition. For those around my age who’ve used voice recognition, you had to rely on the fact that if you learned typing in high school, it really helped you.

Then we moved to a point where voice recognition got better—yes, it did—but it remained disorganized. Now, with agentic-based AI, it feels like we’ve come full circle.

We’re now using this agentic AI as the transcriptionist we’re having a discussion with. It’s as if we’re back to simply talking about what we see: describing the images, talking as though we’re speaking to a referring doctor.

And that’s exactly how we’re trained—that’s why we do this and why we love it. It’s phenomenal that we’re getting to this point. I’m so excited to start using agentic-based AI once it’s out.

I think it’s going to dramatically change the way radiology is practiced and take us back to the point where radiologists can truly focus on being radiologists.

Brian Casey: Yeah, it's definitely gonna be an exciting future. Some great concluding thoughts. We're now ready to start taking questions. Give us one second to get set up and we will be right back. Remember that you can use the Q&A button at the bottom of your screen to ask any questions, and we will be right back.

 

We are back. We do have some questions coming in already. Dr. Rizvi, I'd like to direct this first question to you. As a radiologist, can I use a combination of AI generative reporting and my favorite historical templates with what you showed?

 

Dr. A. Rizvi: Yes, a hundred percent. We’ve built a system that includes all the standardized templates already built into it, but you’re not limited to using only our templates. You can definitely use your own, and there are very easy methods to add those templates.

You can do this at the user level, or you can use the backend administration panel, which allows administrators to customize and add templates for individual radiologists.

The way we think about templates in this new future is as a guide for the AI—essentially instructions for where to place things. That’s how we define templates at this point.
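For readers who want a concrete picture of "templates as instructions for where to place things," the idea can be sketched roughly as follows. This is a hypothetical illustration only, not RADPAIR's implementation: the section names and routing keywords are invented, and a production system would use a generative model rather than keyword matching for the placement step.

```python
# Hypothetical sketch: a report template treated as placement
# instructions, not a rigid form filled field by field.
# Section names and routing keywords below are invented.

TEMPLATE = {
    "Lungs": ["nodule", "effusion", "atelectasis", "pneumothorax"],
    "Heart": ["cardiomegaly", "pericardial"],
    "Lines/Tubes": ["catheter", "endotracheal", "feeding tube"],
}

def place_findings(dictation: str) -> dict:
    """Route free-form dictated sentences into template sections by
    keyword; a real system would delegate this step to an LLM."""
    report = {section: [] for section in TEMPLATE}
    for sentence in dictation.split(". "):
        text = sentence.strip().rstrip(".")
        if not text:
            continue
        for section, keywords in TEMPLATE.items():
            if any(k in text.lower() for k in keywords):
                report[section].append(text)
                break
        else:
            # Anything unmatched falls back to a general section.
            report.setdefault("Findings", []).append(text)
    return report

report = place_findings(
    "There is a small left pleural effusion. "
    "Right-sided IJ catheter is present within the SVC."
)
```

The point of the sketch is the direction of control: the radiologist speaks naturally, and the template only tells the system where each statement belongs.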

Brian Casey: So we have a question coming in from Trent:

What if the patient is part of a lung screening program and you want Lung-RADS guidelines versus Fleischner? Can this product recognize the patient as a lung cancer screening patient and suggest guidelines based on the patient’s history in the EMR/PACS?

Dr. A. Rizvi: Yes, the answer is yes, it can. We have all those guidelines built into the system. Whether it’s Lung-RADS, BI-RADS, or TI-RADS—all the ACR guidelines are included.

You can choose which guideline you want to use, and if it’s a screening case and it’s mentioned as such, our AI algorithms will determine that and automatically pick the right classification and guideline.

 

Brian Casey: Alright, perfect. Vijay, I’d like to direct this next question to you.

What are some improvements that PACS administrators and IT staff will gain with an embedded report generator in PowerServer?

 

Vijay Ramanathan: Sure, so there are a few key improvements.

One of the main advantages is that it becomes a lot easier to maintain templates with AI reporting because you actually need fewer templates. Historically, templates were required to cover each and every case.

With AI reporting, the AI is intelligent enough to handle most cases with just one template per study type, without the need for creating templates for every slight variation. For example, a normal study with slight variations used to require its own template or macro. Now, the AI’s intelligence eliminates the need for such granular customization.

 

This is a significant aid for PACS administrators and medical directors, as fewer templates mean much simpler management. Managing a vast number of templates can become a major ordeal, and this improvement streamlines the process significantly.

I think Dr. Luchs might want to comment on this as well, since managing a huge number of templates can indeed be quite challenging.

Dr. J.S. Luchs: Yeah, no, I completely agree. I mean, you know, everybody wants their own templates. They always want their own templates exactly how they want them, and it can make it much more difficult for the IT team to deal with it.

And if you don’t have to deal with that, or if something’s already embedded, or if you’re dealing with an AI that’s going to wind up creating your templates for you, it’s going to make it so much easier on the team.

Brian Casey: Yeah. Dr. Luchs, can you talk a little bit about turnaround times? You spoke earlier about burnout and efficiency, and that kind of thing when it comes to turnaround times, which is an actual metric that radiologists are measured by. What advantages does a combined PACS and reporting solution have in terms of turnaround? Does it let you get reports out faster?

Dr. J.S. Luchs: Yeah, when something is embedded, and they actually—as I was saying before—they talk to each other and they’re happy talking to each other, it makes it much easier for the radiologist.

If things are loading up quicker, if the system loads up immediately and you don’t have to wait for another program to open up or start up after you click something, it’s going to affect your turnaround time. Turnaround time shouldn’t be affected by what the computers are doing. It should be affected by you as the radiologist and how long your report’s going to be, and how long it takes you to read it.

Having two systems embedded together that actually deal with each other very well really does help the turnaround time.

The other thing, too, is it also helps for integration because if you have one system and you want to buy another system, you’re going to have to make sure that those things are integrated together to some degree, at least that they’re talking to each other. That’s also going to slow down the transition of starting an AI-type voice recognition. So it helps on both ends.

Brian Casey: Very good. We’ve got a question in from Christian in the audience. Dr. Rizvi, can I train the AI with any particular statements radiologists use inside their reports?

Dr. A. Rizvi: Yeah, so maybe they’re referring to macros or other things that you might say. And yes, all the features that are typically used by radiologists, like quick phrases that you say—like macro this, macro that—all of those can still be used in the transcript.

You can create those macros in our system. You can upload them from other various systems. So they could all be used within our system.

Brian Casey: Okay. Another follow-up question from Carl. Can DICOM SR (which I believe is DICOM structured reporting) data be automatically integrated into the radiology report, especially in specified areas in the report?

Dr. A. Rizvi: Yeah, this is probably one of the coolest things that's coming, which I'll give you a little sneak peek on. If you have structured SR data that’s coming through, the cool part about generative AI is you don’t need to map those into individual fields. If you’ve trained the models well enough, and we have at Rad Pair, they can just take them as is and put them into any template as long as there’s a section for it.

For example, if you have a section called Kidneys, and information from the SR is coming in for both the left and right kidneys, it’s just going to get put in the right place. You don’t even need to do a significant portion of mapping.
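A rough sketch of that section-matching idea in Python: structured measurements (such as those a toolkit like pydicom could extract from a DICOM SR) are routed to whichever template section matches, with no per-field mapping table. The measurement tuples, section names, and keyword matching here are invented for illustration and merely stand in for the AI layer described above.

```python
# Hypothetical sketch: routing SR-style measurements into report
# sections without a per-field mapping table. All names invented.

SECTIONS = {"Kidneys": ("kidney",), "Liver": ("liver",), "Spleen": ("spleen",)}

def merge_sr(measurements):
    """measurements: iterable of (concept_name, value, unit) tuples,
    as might be pulled from a DICOM SR ContentSequence."""
    placed = {name: [] for name in SECTIONS}
    unplaced = []
    for concept, value, unit in measurements:
        for section, hints in SECTIONS.items():
            if any(h in concept.lower() for h in hints):
                placed[section].append(f"{concept}: {value} {unit}")
                break
        else:
            unplaced.append(f"{concept}: {value} {unit}")
    return placed, unplaced

placed, unplaced = merge_sr([
    ("Right kidney length", 10.2, "cm"),
    ("Left kidney length", 9.8, "cm"),
    ("Liver span", 15.1, "cm"),
])
```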

Now, where the really cool stuff comes in is when we go multimodal and start looking at the images and the actual summary image data from ultrasounds and pull that in automatically.

This is what we call report caching. Just like the concept of image caching, where the images are already loaded when you open a case, report caching preloads all your data elements—whether they’re multimodal image classifiers, SR data, or metadata from the system.

When you open that case, it’s all there for you. At that point, if you want to add something, you can, but if the case is already done, you just hit sign, and you’re finished. This saves a tremendous amount of time.

Brian Casey: Yeah, for sure. I got a question here from Luke. Can the integrated RamSoft Rad Pair solution determine the relevant priors a patient has in the RamSoft Patient Explorer and put the dates and descriptions of those priors into the comparison section of the report? If not, is that level of integration on the roadmap?

Vijay Ramanathan: Yes, this is absolutely the next phase of the integration that is in progress—to be able to capture the prior data automatically so that it doesn’t need to be dictated. Conventionally, a radiologist would dictate information from prior exams that they actually look at, but all of that could be done automatically.

This is absolutely in progress.

Brian Casey: Here’s an interesting question. Over the past year, we’ve had two instances of complete internet connectivity failures for several hours, making it impossible to run radiology applications. Is there a capability to still utilize your application when this happens, such as during a disaster or cyberattack?

Vijay Ramanathan: Normally, what we recommend in terms of internet connectivity is to have a backup solution and a downtime procedure in place. Today, the most appropriate backup solution is cellular connectivity.

For instance, most of us have phones with 5G connectivity, which is more than sufficient for backup scenarios when primary internet connectivity goes down.

In terms of ensuring radiologists can continue to work during an internet outage, having a cloud-based solution is very helpful. Cloud-based solutions continue to run even if there is disruption at one particular site.

If connectivity is lost at one location, radiologists can work remotely from another location or from home. This flexibility is one of the big advantages of cloud-based systems.

Dr. J.S. Luchs: Yeah, it’s similar to what Vijay said. We’ve been using RamSoft for over a decade, and having a cloud-based PACS makes it much easier to stay operational during a sudden power surge or internet outage.

For example, you can quickly switch to a mobile hotspot and continue working without having to restart your PACS system or other equipment.

Cybersecurity is also extremely important, and we rely heavily on RamSoft’s robust security measures. It’s been phenomenal.

Brian Casey: Very good. Dr. Rizvi, are there ways to modify the template, like including a clinical correlation section? Also, if there are sections left empty, does the interpreting radiologist input some verbiage, or can it be finalized?

Dr. A. Rizvi: The templates are completely modifiable. You can add sections or make any changes you want.

If a section is completely blank, you’ll need to tell the system what to put in those areas. However, if there are existing normalized text phrases in those sections, our system is smart enough to recognize contradictions and replace outdated information with the updated content you provide.

Brian Casey: Question from Paolo: Is it possible to use this tool integrated with PowerScribe?

Dr. A. Rizvi: Rad Pair is a full reporting solution on its own. It includes all the capabilities of legacy reporting systems, plus full generative AI reporting, classifications, and the agentic AI features we’ve discussed.

We see it more as a replacement for older legacy reporting systems rather than just an addition.

Brian Casey: Question from Brad: With large language models (LLMs) becoming more popular, how does Rad Pair differentiate itself from other solutions using LLMs? Is it on speed, accuracy, integration, or all of the above?

Dr. A. Rizvi: We look at ourselves as revolutionizing the reporting paradigm, not just making incremental improvements.

Historically, reporting systems have made small, incremental changes over the decades, but the UI and UX have stayed largely the same—text boxes, dictated reports, and manual edits. Even companies using LLMs now are primarily focused on minor improvements to the existing workflow.

At Rad Pair, we’re taking a different approach. We aim to change the paradigm entirely by creating a natural communication experience. The goal is for Rad Pair to disappear into the background so that radiologists spend more time focusing on images and less time on the reporting system.

That’s where we’re headed, and that’s how we’re differentiating ourselves.

Brian Casey: That was the Wingman feature, correct?

Dr. A. Rizvi: Yes, exactly. Wingman is the future of radiology reporting—natural, seamless communication.

Brian Casey: How far out are we from seeing something like Wingman?

Dr. A. Rizvi: You’ll be able to see and use it at RSNA. It’s not just a concept or a gimmick—it’s a fully functional product. As soon as RSNA is over, you can purchase it.

Brian Casey: That’s amazing. So, Dr. Rizvi, it seems like, initially, AI in radiology focused mostly on analyzing images and highlighting suspicious areas. But recently, a lot of attention has shifted to reporting solutions. Why do you think that is?

Dr. A. Rizvi: It’s because radiologists spend so much of their time on reporting, which is largely non-value-added work.

The value of a radiologist is in analyzing images and diagnosing conditions, not editing reports. However, the process of creating the report is necessary and can’t be avoided.

Pixel AI, or image classification, is helpful and speeds up diagnosis by pointing out areas of interest. But removing the burden of creating reports manually has an even greater impact on a radiologist’s daily workflow. Solutions like ours aim to eliminate this non-value-added task entirely.

Brian Casey: Dr. Luchs, do you see AI’s real value as more in helping with reporting rather than analyzing pixels?

Dr. J.S. Luchs: I think it’s a bit of both.

AI programs that assist in identifying findings on images can speed things up slightly, but you’re still analyzing the images yourself. Generative AI for reporting, however, takes away the non-medical part of the job.

It allows radiologists to practice radiology and focus on the clinical aspects of their work rather than the administrative aspects. It speeds up the process, improves accuracy, and lets us provide better patient care.

Brian Casey: If radiologists save time on reporting, where do you think that time will go?

Dr. J.S. Luchs: It will likely go toward the things we already do—reading more imaging studies per day, analyzing studies in greater detail, or speaking more with referring doctors.

Ultimately, it will shift from secretarial tasks to more patient-focused activities, which is exactly where it should go.

Brian Casey: Vijay, how about you? When it comes to AI for pixel analysis versus reporting, which do you think is more important for radiologists?

Vijay Ramanathan: I don’t think it’s about choosing one or the other—we need both.

Pixel AI helps radiologists diagnose conditions that might not be obvious from the images alone. Reporting AI, on the other hand, speeds up the creation of reports.

These solutions address different challenges in the field, including the shortage of radiologists and the need for faster turnaround times. Together, they enhance productivity and allow radiologists to focus on what truly matters—patient care.

Brian Casey: That makes a lot of sense. We’re starting to wrap up, but we have a couple more technical questions about Rad Pair.

Dr. Rizvi, does Rad Pair require FDA approval or clearance for use in teleradiology?

Dr. A. Rizvi: No, it does not. While there is a lot of discussion about potential regulations for LLMs, Rad Pair currently does not require FDA approval.

Brian Casey: Are there any studies quantifying improvements in turnaround times or report output metrics?

Dr. A. Rizvi: It’s still early days, but we are working with a university on a white paper. We might have some preliminary data ready for RSNA.

Brian Casey: Any plans to expand Rad Pair to the UK, where radiographers read images?

Dr. A. Rizvi: Yes, we have international channel partners. If you’re attending RSNA, stop by our booth to discuss potential partnerships.

Brian Casey: That’s great to hear. So, as a reminder, RSNA is coming up in just a couple of weeks. Dr. Rizvi, you mentioned where people can see Rad Pair at RSNA. Can you remind us of your booth location?

Dr. A. Rizvi: Sure! We’ll be in the AI section at Booth 4918. Stop by to see RadPair in action and chat with us.

Brian Casey: Great. Dr. Luchs, will Premier Radiology Services also have a booth at RSNA?

Dr. J.S. Luchs: Yes, we’ll be at Recruiters Row, South Hall Booth 1139. Looking forward to seeing everyone there.

Brian Casey: Vijay, where can attendees find RamSoft at RSNA?

Vijay Ramanathan: We’ll be in the North Hall at Booth 6513. We’d love to meet everyone and discuss our solutions.

Brian Casey: Perfect. Dr. Rizvi, I understand you have some exciting news to share with us before we wrap up?

Dr. A. Rizvi: Yes, we just found out that Rad Pair has won the Best New Vendor of the Year Award for 2024 from Aunt Minnie. We’re incredibly proud of our team and grateful to our partners like RamSoft and Premier Radiology Services for making this possible.

Brian Casey: Congratulations! That’s an impressive achievement and well-deserved recognition for the work you’re doing.

Dr. J.S. Luchs: Congratulations, Dr. Rizvi. Well done.

Vijay Ramanathan: That’s amazing—congratulations to you and your team!

Brian Casey: I’d like to thank Dr. Jonathan Luchs of Premier Radiology Services, Vijay Ramanathan of RamSoft, and Dr. Rizvi of Rad Pair for this fascinating discussion.

Dr. J.S. Luchs: Thanks for having us.

Vijay Ramanathan: Thank you.

Dr. A. Rizvi: Thanks so much for the opportunity.

Brian Casey: And a big thank you to our attendees for joining us today. This was the first Imaging Wire Webinar, and we’re thrilled with the turnout. Be sure to stop by RSNA to see all the great technology we’ve talked about today.

Signing off for the Imaging Wire, I’m Brian Casey.
