Revolutionizing Breast Cancer Detection with AI: A Conversation with iSono Health’s Founder and CTO Shadi Saberi


iSono Health has developed the world’s first AI-driven, portable, and automated 3D breast ultrasound scanner. Through our Reshape Health Grants, we’re supporting them in enhancing their AI capabilities—improving the clarity and reliability of their AI outputs, enabling users to act with confidence, and optimizing their deep learning model for even greater lesion detection accuracy.

We sat down with Cofounder and CTO Shadi Saberi to discuss iSono Health’s mission, the impact of AI on breast cancer detection, and how this grant is helping them advance life-saving technology for millions of women worldwide.


Q: Welcome Shadi! Tell us a little bit about yourself.

A: I’m cofounder and CTO of iSono Health. I’m an electrical engineer by training; I got my PhD from Carnegie Mellon in Pittsburgh. And I started iSono Health because of a personal experience of losing a loved one to breast cancer.

Q: What’s your company’s mission and vision, and what inspired you to create it?

A: Our mission is to make sure no woman ever dies from breast cancer, through early detection. After losing a loved one, I realized that breast cancer can be missed in the standard-of-care screening, which is mammography. My mom is actually a radiologist who practices outside the US, and through her I learned that half of women have dense breasts, for whom a mammogram is not sufficient. They need supplemental imaging, and ultrasound could be a good alternative. As an engineer, knowing that ultrasound is safe enough to be used for pregnant women and babies, I always wondered why it isn’t used for whole-breast imaging and breast cancer screening, and what the challenges are. Digging into it, I realized one of the biggest problems is that whole-breast ultrasound requires the very specialized skill of a breast sonographer or a physician, and it’s also very time-consuming. Ultrasound is a very sensitive modality: it sees a lot of things, but it’s not as specific, so it can create more false positives. And in a lot of places there’s also a shortage of people who can perform breast ultrasound.

So it is only used after an abnormality is detected on a mammogram. But if you have dense breasts, a mammogram cannot see through the tissue very well. The cancer can get missed, and a woman can be diagnosed with late-stage breast cancer even though her mammogram was clear a couple of months earlier. That really motivated me to find out how we can solve these issues with creative engineering. One of them: how can we automate things so that we don’t need a skilled sonographer, so that any healthcare professional can operate the machine? How can we make it portable and low-cost so that it can be deployed in many settings, not just in a radiology center but at the point of care: in lower-resource settings, in community health centers, in rural settings, in a mobile unit? That also has a lot of global health implications in places where mammography is not even available or accessible.

So that’s our vision and mission: to make sure no woman ever dies from breast cancer, because breast cancer is one of the most treatable cancers if detected early. The chance of survival is 99%. But unfortunately, the mortality rate has not decreased over the past twenty years; it is actually increasing. And in a lot of developing countries where lifestyle changes are happening and societies are becoming more industrialized, there’s actually a huge rise in breast cancer. Another alarming thing is breast cancer in much younger women, because breast density is very related to age: the younger the woman, the denser the breast tissue.

So that means a mammogram becomes very ineffective for these women. This is really our goal: to make sure that early detection is accessible to women regardless of where they live. That is the mission of the company, and we’re still at the early stages.

Q: Can you walk me through your product?

A: Our device is basically a mini robot. It uses a transducer similar to what is used in a handheld probe. It attaches to a wearable accessory, gets positioned on the breast, and can automatically scan the whole breast, completely hands-free, in two minutes. That means any healthcare professional can be trained in a couple of hours to place it on a patient; the software guides them through all the steps, they press a button, and it scans. The scan itself only takes two minutes. Afterwards we do 3D reconstruction, and the images are available to be reviewed by a trained physician or a radiologist. This part of the system (the imaging, the 3D reconstruction, and the software) is already FDA-cleared. We are working on adding advanced AI.

What happens during a manual handheld ultrasound with a sonographer or a physician is not just the mechanical part of the scanning; that part we have automated. There’s also a lot of cognition happening: the human is looking at the tissue, finding abnormalities, and looking at those abnormalities from different angles to decide whether something looks suspicious and requires further evaluation. Right now, that part has to happen offline. One thing we have done is completely decouple acquisition from interpretation, because before, the person doing the manual scanning had to interpret in real time while the patient was there.

But right now we reconstruct the whole volumetric data and can send it off-site to a remote radiologist across the country, or even across the world, to interpret. This can be improved further if we add AI that does the inference in real time to triage the patient, detect the abnormality, and let the operator know, so they can follow up with the same actions on-site. In a lot of places the patient has to travel to another imaging site, so by adding this AI we can reduce that need for travel. We can also improve the workflow of the radiologists and make the diagnosis more accurate.

Q: What makes you different from other products on the market?

A: Our competitors on the market right now fall into two groups. On one side are portable handheld ultrasound systems; for example, Butterfly and the big companies have portable probes that connect to your iPhone, called point-of-care ultrasound, or POCUS. These systems still require a skilled physician or a sonographer to operate them. There is some AI being developed to let an untrained user operate them, but usually it’s for applications where the anatomy is easier to identify. One thing about breast tissue is that everyone’s breast tissue is very different.

Breast tissue is like a fingerprint, and it’s very heterogeneous, so things that are normal can look like something abnormal or malignant. It requires very specialized training; not even every sonographer can do a breast ultrasound. That’s why it’s very hard for these systems to scale to breast applications.

A lot of them don’t even have the resolution or image quality needed for a breast application. On the other end of the spectrum are the traditional console-based ultrasound systems, which are still used in many imaging centers with specialized sonographers. And then there is the large equipment that automates breast ultrasound: big companies like GE have one, Siemens has one.

Some larger start-ups also have one. Some of them are bed-sized, so the woman has to lie down and her breast hangs in a water tank. There’s a lot of innovation in that space, but mostly they are positioning their devices for large imaging and radiology centers. They’re capital equipment: they require a whole room, and the price point is hundreds of thousands of dollars, so it’s not really an accessible solution.

We are essentially marrying these two worlds to create the world’s first portable, automated breast ultrasound system that doesn’t require any skill to operate beyond being a healthcare professional, and it can be taken to any setting. We really want to bring this imaging outside of the radiology center, to the point of care, to where women are. It could be a community setting, a rural setting, or a mobile van. That is how we think we can increase access.

Q: What is the core problem you are solving, and how did you identify it?

A: I think the biggest issue is really operator dependency, which is a major problem in breast ultrasound. The skill of the sonographer who does the breast ultrasound completely defines the course of diagnosis. If they are not able to get good images, or if they miss an area, then the radiologist who later looks at the images cannot diagnose correctly. That’s why the quality is hit and miss.

We hear radiologists say all the time, “I don’t trust what my technician did; I have to go into the room and rescan myself.” And it’s also very time-consuming. So we’re trying to take operator dependency completely out of the question: make the scan very repeatable with consistent quality, and cover the whole breast so we’re sure nothing was missed. The radiologist can then be confident that what they’re reading is the whole breast. And then there’s workflow.

The scan we do with our device takes only two minutes, while a manual breast ultrasound can take twenty to forty minutes. So it’s speed and workflow as well as repeatability and quality.

Q: Who are your primary users or customers within the healthcare system (patients, doctors, hospitals, researchers)?

A: Currently, we sell to providers, across several customer segments. There are imaging centers that are interested in having this device in their satellite units or their mobile mammography vans.

Then there are breast surgeons. Breast surgeons see a lot of patients with breast complaints, so they would like to do this in their office, or, when a patient is diagnosed, image them with 3D reconstruction and do pre-operative planning. And then there is primary care: the primary care physicians and OB-GYNs who are already seeing patients for their annual checkups would like to do this as part of that checkup. If a woman comes in for her mammogram and the practice has this device, instead of waiting weeks to get scheduled for a whole-breast ultrasound, they can operate it in their own office and have a remote radiologist read it.

Q: What milestones or key achievements have you reached so far?

A: A key milestone for us is that we achieved our first FDA clearance. Right now we are in early commercialization, and the major milestones we are working towards are: starting a large, multisite prospective clinical study to build a multimodality breast imaging registry so we can train and validate the different AI modules we’re working on; after that, submitting for additional FDA clearances; and deploying to sites across the different types of healthcare settings I mentioned, to see how the product operates in the real world.

Q: How do you integrate AI into your solution? 

A: We use AI across the whole stack, from data acquisition all the way to interpretation and diagnosis. We use deep learning, but we have traditional models as well. It’s a combination, but most of them, right now, are computer vision models that work on images.

We are also working on more advanced models that work with our raw data. One of the advantages of our system compared to traditional ultrasound is that we take the raw signals out of the ultrasound before they are converted to grayscale images. There is a lot of rich data about the tissue in that raw data that gets filtered out before it is converted to images. There’s a field called quantitative ultrasound that has shown you can estimate physiological properties of the tissue from this raw data, so that’s another area of machine learning we’re working on.
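To make “quantitative ultrasound” a bit more concrete, here is a minimal, hypothetical sketch of one classic approach: fitting spectral parameters (slope, intercept, mid-band fit) to the power spectrum of raw RF lines inside a region of interest, which could then feed a downstream tissue-characterization model. This is not iSono Health’s method; the function name, sampling rate, and analysis bandwidth are illustrative assumptions.

```python
# Illustrative sketch only, not iSono Health's pipeline. It computes classic
# quantitative-ultrasound spectral features from raw RF data in one ROI.
import numpy as np

def spectral_features(rf_roi: np.ndarray, fs_hz: float,
                      band_hz=(2e6, 8e6)) -> dict:
    """rf_roi: 2D array (depth samples x scan lines) of raw RF data."""
    n = rf_roi.shape[0]
    window = np.hanning(n)[:, None]                      # reduce spectral leakage
    power = np.abs(np.fft.rfft(rf_roi * window, axis=0)) ** 2
    avg_db = 10 * np.log10(power.mean(axis=1) + 1e-12)   # average over scan lines
    freqs = np.fft.rfftfreq(n, d=1.0 / fs_hz)

    # Linear fit over the usable transducer bandwidth (assumed 2-8 MHz here)
    in_band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    slope, intercept = np.polyfit(freqs[in_band], avg_db[in_band], deg=1)
    midband_fit = slope * np.mean(band_hz) + intercept
    return {"spectral_slope": slope,
            "spectral_intercept_db": intercept,
            "midband_fit_db": midband_fit}

# Synthetic example: 1024 depth samples x 64 lines sampled at 40 MHz
rf = np.random.randn(1024, 64)
print(spectral_features(rf, fs_hz=40e6))
```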

In general, on the acquisition and image processing side, we have different types of models for image enhancement and denoising. Then, after the reconstruction, we have models for automatic lesion detection, classification, segmentation, breast density calculation, and risk estimation, which are more on the patient care and clinical decision support side.
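As a rough mental model of that staged setup (per-frame enhancement before reconstruction, then volume-level analysis afterwards), here is a small, hypothetical Python sketch. The class, the stand-in models, and the naive slice-stacking reconstruction are illustrative assumptions, not iSono Health’s actual architecture.

```python
# Hypothetical wiring that mirrors the staged description above: frame-level
# enhancement models run before 3D reconstruction, and detection / density /
# risk models run on the reconstructed volume. Placeholders keep it runnable.
from dataclasses import dataclass, field
from typing import Callable, List
import numpy as np

Frame = np.ndarray    # one 2D ultrasound image
Volume = np.ndarray   # reconstructed 3D breast volume

@dataclass
class BreastUltrasoundPipeline:
    frame_models: List[Callable[[Frame], Frame]] = field(default_factory=list)
    volume_models: List[Callable[[Volume], dict]] = field(default_factory=list)

    def run(self, frames: List[Frame]) -> dict:
        # 1) Per-frame enhancement (e.g., denoising) before reconstruction
        enhanced = [self._enhance(f) for f in frames]
        # 2) Naive 3D reconstruction: stack parallel slices into a volume
        volume = np.stack(enhanced, axis=0)
        # 3) Volume-level analysis: detection, density, risk, etc.
        findings = {}
        for model in self.volume_models:
            findings.update(model(volume))
        return findings

    def _enhance(self, frame: Frame) -> Frame:
        for model in self.frame_models:
            frame = model(frame)
        return frame

# Stand-in models so the sketch runs end to end
denoise = lambda f: np.clip(f, 0.0, None)                      # placeholder denoiser
density = lambda v: {"breast_density_score": float(v.mean())}  # placeholder metric
detect = lambda v: {"lesion_candidates": []}                   # placeholder detector

pipeline = BreastUltrasoundPipeline(frame_models=[denoise],
                                    volume_models=[density, detect])
print(pipeline.run([np.random.rand(64, 64) for _ in range(10)]))
```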

Q: How is AI helping iSono Health achieve its goals?

A: AI in our products has been developed from day one. We have the vision that AI is a critical component of our system, and everything was designed to have AI seamlessly integrated. For the end user, it should not be something they see as an add-on.

It’s already integrated, and it makes the workflow easier and seamless. That is really what we want to achieve with AI, both in image acquisition and in diagnosis. On the acquisition side, the operator is not someone familiar with ultrasound and doesn’t know how to set up image settings, so AI can help with that, flag image quality issues, and tell them what to do and how to correct and repeat the imaging. On the diagnosis side, one of the things we hear from radiologists is that these 3D automated breast ultrasound exams take longer to read, because there are so many slices.

It’s comparable to the time they spend reading an MRI or a CT, so they want it to be faster. Our hope is that AI will help them read faster and will automate their workflow: the annotations, the many manual things they have to do, the comparisons with prior images. That way their work will be more efficient, and they will be more confident in their final decision.

Q: Thinking of iSono Health’s journey, what were your needs before winning the Grant? 

A: So far, we’ve been developing models, and they’re still in the R&D stage. But what is important for us, as I said, is the user experience and interaction of the different users we have. There’s the physician, who can be different types of physician: a radiologist, a surgeon, a primary care doctor; and then there’s the operator as well. The experience each of these personas has with the AI is different. We usually prototype our AI into the product and show them a demo to get feedback, but those feedback sessions happen in a very sporadic way, like when we go to trade shows or happen to have a demo.

The design needs to be refined so that this part of the product, the interaction between the human and the AI, is designed properly and addresses their unmet needs. We hear a lot from radiologists things like, “Oh, I don’t trust that AI; every time it says something I think it’s wrong.” We don’t want that to happen with our models. We want to take their input fully into account and see them as partners who give us feedback on what they need, because as engineers and external people we might have an idea of what would make their lives easier, but that might not be true. When you talk to a lot of radiologists, they say everyone focuses on the sexier things, on diagnosis, but what helps them is making the time-consuming parts of the job, the writing, the measuring, faster.

That’s why LLMs are getting a lot of traction in radiology: the reporting part is what takes so much of their time. So this is what we feel we lacked, and we’re hoping that with this grant we gain a better understanding of the real user need and of the experience and interaction they want to have with the AI.

Q: What are your expectations after we finish the Grant process?

A: That we have a good idea of how this design gets prototyped and how we’re going to implement it, because our next step is to implement the refined design together with the models that are being trained and readied for validation. Because the models are already integrated into our product, validation isn’t just about model performance; it’s also about the whole design and the interaction with the users, and whether we are improving user performance compared to when they didn’t have access to the AI. That is the validation we want for our FDA submission.

Q: How would you measure success?

A: That we achieve a design that makes the end user happy and excited to try and work with, and that our engineering team feels is achievable to implement.

Q: Anything else you’d like to share?

A: I feel that, in general, people underestimate human-computer and human-AI interaction at this point, and the value that that thoughtfulness brings. So I’m really happy that we have this opportunity to invest in this part, and grateful for the grant that is helping us achieve our goals.