We are excited to share a new project we have been working on in collaboration with Hungary’s leading medical research university, Semmelweis University. The project focuses on using artificial intelligence and image recognition to improve the accuracy and efficiency of breast cancer screening.
The Power of AI in Detecting Breast Cancer
Early detection and prevention are crucial in the fight against breast cancer, and recent advancements in technology have made it possible for healthcare workers to receive computer assistance in examining mammograms and identifying problematic areas. The integration of machine learning and image recognition technologies in the medical field has the potential to revolutionize breast cancer screening, making it more accurate and efficient.
However, the widespread adoption of these artificial intelligence-based solutions will not be possible without good products with great user experience. A user-friendly interface will make it easier for healthcare workers to use these technologies and improve patient outcomes, making it a crucial component in the fight against breast cancer.
As a recognized leader in medical research, Semmelweis University has a long-standing reputation for producing cutting-edge advancements in the field. We are proud to have the opportunity to partner with such an esteemed institution and contribute to their ongoing efforts to improve medical outcomes and advance the field of medicine.
Implementing AI with a User-Friendly Interface
The primary objective of this project was to make the workings of the algorithm more visually accessible to medical professionals. The goal was to design a platform that runs on any device with network connectivity, either as a standalone application or via Docker. This lets us demonstrate the algorithm on-site, making it easier for those considering its use to get a hands-on understanding of how it works.
During the image processing and annotation stage, the application provides real-time feedback in the form of a visual animation and progress bar. This helps users keep track of the analysis as it progresses and gives them a sense of the speed and efficiency of the algorithm. Once all the images have been processed, the application highlights those that show an increased risk, based on the annotations, in an interactive gallery. This gallery provides a clear and easy-to-understand representation of the algorithm’s results, making it a valuable tool for both users and potential adopters.
(The software is intentionally slowed down for presentation purposes; it is much, much faster in reality.)
The Algorithm: Using Faster R-CNN and VGG16
The image detection algorithm at the core of this project uses a state-of-the-art region-based deep convolutional neural network called Faster R-CNN. This powerful model was specifically designed for object detection and proved to be an effective tool for identifying problematic regions in mammograms. The base network used in the model was VGG16, a highly regarded 16-layer deep convolutional neural network readily available through PyTorch’s torchvision library. To make the algorithm even more effective, we fine-tuned it to detect two types of lesions in mammogram images: benign and malignant.
The output of the algorithm is not just a simple diagnosis, but a comprehensive report that includes a score reflecting the confidence level in the diagnosis for each detected lesion. The algorithm also generates a modified image that clearly highlights the locations of the detected lesions by overlaying bounding boxes on the original mammogram. This makes it easy for healthcare workers to understand and interpret the results of the analysis.
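Conceptually, the raw detector output (boxes, labels, scores) is turned into a per-image report of findings. The helper below is a simplified, hypothetical illustration of that step, not the project’s actual code; the field names and the 0.5 threshold are assumptions:

```python
# Map the detector's numeric labels to human-readable lesion types.
# Label 0 is the background class, so it never appears in detections.
LESION_TYPES = {1: "benign", 2: "malignant"}

def build_report(boxes, labels, scores, threshold=0.5):
    """Turn raw detections into a list of lesion findings.

    boxes  -- list of [x1, y1, x2, y2] pixel coordinates
    labels -- list of integer class labels
    scores -- list of confidence scores in [0, 1]
    """
    findings = []
    for box, label, score in zip(boxes, labels, scores):
        if score < threshold:
            continue  # drop low-confidence detections
        findings.append({
            "type": LESION_TYPES[label],
            "confidence": round(score, 3),
            "bounding_box": box,
        })
    return findings

report = build_report(
    boxes=[[120, 80, 210, 170], [300, 240, 350, 290]],
    labels=[2, 1],
    scores=[0.91, 0.34],
)
# Only the high-confidence malignant finding survives the threshold.
```

The surviving bounding boxes are what get drawn onto the original mammogram for the overlay image.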
The Backend–Frontend Integration
The backend is responsible for running the algorithm and for ensuring that the results are sent to the frontend as soon as the analysis of each image completes. The input images are first sent to the frontend, where they are overlaid with a scanline animation, providing a visual indication that the analysis is underway. As soon as the results are available, they are displayed on the frontend as a simple red/green overlay and a small animation before transitioning to the next image.
To avoid performance issues, the algorithm processes the images serially; running it in parallel would quickly exhaust memory when only the CPU and system RAM are available. PyTorch, however, makes it easy to fall back to the CPU when no CUDA-capable GPU is found, so the algorithm can run even on basic devices, albeit with reduced efficiency.
When a suitable NVIDIA GPU is available, the algorithm can run in larger batches, producing results much faster. To get the results back to the frontend, we used Socket.io, which provides real-time communication between the backend and the frontend and lets us push data directly to the frontend as soon as the algorithm finishes processing an image.
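The serial loop and the per-image push can be sketched independently of Socket.io: the backend processes one image at a time and hands each result to an emit callback the moment it is ready. All names here are illustrative stand-ins, not the project’s actual API:

```python
def process_serially(images, run_model, emit):
    """Run the detector on one image at a time and push each result
    to the frontend (via the emit callback) as soon as it is ready."""
    for image_id, image in images:
        result = run_model(image)  # one image at a time: no memory spikes
        emit("analysis-result", {"id": image_id, "result": result})

# Illustrative stand-ins for the real model and the Socket.io emitter.
sent = []
process_serially(
    images=[("img-1", "pixels-1"), ("img-2", "pixels-2")],
    run_model=lambda img: {"lesions": []},
    emit=lambda event, payload: sent.append((event, payload)),
)
```

In the real application, the `emit` callback would be a Socket.io server-side emit, so the frontend receives each result the moment it exists rather than waiting for the whole batch.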
Images with a confidence score below a certain threshold are considered “negative,” indicating that they are likely healthy. These images are presented in a distinctive way, with a small scale-out, scale-in animation implemented as a CSS animation using the scale() transform function.
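The effect boils down to a few lines of CSS. A minimal sketch of such a keyframe animation (class name, duration, and scale factor are illustrative, not the project’s actual stylesheet):

```css
/* Briefly shrink, then grow back: a quick "pulse" for negative images. */
.negative-image {
  animation: scale-out-in 0.6s ease-in-out;
}

@keyframes scale-out-in {
  0%   { transform: scale(1); }
  50%  { transform: scale(0.85); }
  100% { transform: scale(1); }
}
```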
Deploying the Application with Docker
The entire application is packaged as a Docker image, making it more accessible and easier to run and distribute. One advantage of this approach is the ability to deploy the application to a cloud service, which opens up the possibility of accessing it from anywhere with an internet connection.
However, it is important to consider the architecture of the system the application will run on, as this determines which base image to choose. On M1 (Apple-silicon) Macs, for example, arm64 images are required, and attempting to run images built for other architectures may result in errors. With that caveat, packaging the project as a Docker image provides the portability needed for seamless deployment and usage across different platforms.
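The architecture mismatch can be avoided by making the target platform explicit with Docker’s `--platform` flag. A sketch of what this looks like for an Apple-silicon host (the image name and port are illustrative):

```shell
# Build the image for the architecture you will run it on.
docker build --platform linux/arm64 -t mammogram-demo .

# Run it, exposing the web UI on port 3000.
docker run --platform linux/arm64 -p 3000:3000 mammogram-demo
```

On an x86-64 host, `linux/amd64` would be used instead; multi-platform builds via `docker buildx` are another option when one image has to serve both.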
Node, Express and React under the hood
For the implementation, we chose Node.js and Express for the backend and React for the frontend, based on the strengths of these technologies and how well they fit the application’s needs. This is not the only possible design, however: the application could also be implemented as an Electron app, a popular framework for building desktop applications. The key is to find the right tool for the job, and in this case Node.js, Express, and React proved to be the right combination.
How RisingStack can help with your AI project
As businesses and corporations look to harness the power of Artificial Intelligence, there’s a growing demand for software development companies that can help implement these AI models and create web-based user interfaces to accompany them. That’s where RisingStack can help you.
Over the past couple of months, we have created several custom AI solutions for businesses and institutions of all sizes. To name a few:
- Using AI to automatically generate product names and descriptions for webshop engines.
- Creating easy-read text for children with disabilities.
- Sentiment analysis and automatic replies in the hospitality industry.
- Pinpointing breast cancer using neural networks.
Whether you’re looking to create a custom AI model to help streamline your business processes, or you’re looking to build a web-based UI that provides users with a more engaging and interactive experience, we have the skills and expertise to help you achieve your goals.
Get in touch with us to learn more about how we can help you implement AI models and create web-based UIs that drive results for your business.