The proposed multi-modal usable tele-monitoring system for the isolation ICU consisted of three parts: a medical device panel with video image-processing facilities, a transmission part with a packet that included video data and patient respiratory data, and a tele-monitoring part comprising a real-time viewer, a database, and analysis tools. The system was connected to the ventilator, one of the most commonly used pieces of equipment in the ICU (Fig. 1).

Figure 1

To enable remote access, a miniPC was set up for both LAN-based (wired) and WiFi-based (wireless) communication. Dual cameras were mounted at the bottom of the ventilator monitor and connected directly to the miniPC via USB ports. The miniPC was connected to a server computer to establish the tele-monitoring system. Figure 2 provides an overview of the components of the tele-monitoring system, including the medical device panel with dual cameras for video image processing and the transmission part with a packet that includes video data and patient respiratory data. The tele-monitoring part comprises a real-time viewer, a database, and analysis tools. Medical staff outside the ICU can remotely access the data received from the miniPC through the server computer. The specifications of the cameras used in the system are listed in Supplementary Table S2.

Figure 2

Communication

The tele-monitoring system was developed in Python 3.9. For Servo-i ventilators, external equipment can be connected via the RS232C serial interface; the ventilator supports a baud rate of 9600, ASCII or binary data formats, and software handshaking. The miniPC (PN51E1-B, ASUS, Taiwan) employed an AMD Ryzen 7 5700U CPU with 32 GB of RAM. The miniPC and server computer were connected via Transmission Control Protocol/Internet Protocol (TCP/IP) communication. The miniPC transmits the image-processed video of the ventilator screens captured by the dual cameras, together with the raw data received from the ventilator, to the server computer. Images were compressed in JPEG format to reduce the data volume, and 800 × 600 images, matching the size of the ventilator screen, were transmitted using TCP/IP communication protocols. After image processing, image data were transmitted from the miniPC at 10 fps (every 100 ms), and the data received from the ventilator during image processing were assembled into the same packet and transmitted. Curve data were monitored by displaying the pressure, flow, and volume data as real-time graphs at a sampling rate of 50 Hz (one sample every 20 ms). Each packet was composed of the image length, image data, raw-data length, and raw data. Figure 3 shows a visual representation of the system.
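The packet layout above (image length, image data, raw-data length, raw data) can be sketched as a length-prefixed TCP frame. The paper states only the field order, not the encoding, so the 4-byte big-endian length prefixes and function names below are assumptions:

```python
import struct

def build_packet(image_bytes: bytes, raw_bytes: bytes) -> bytes:
    """Frame one transmission unit: [image length][image data][raw length][raw data].

    The 4-byte big-endian length prefixes are an assumption; the paper
    specifies only the field order.
    """
    return (struct.pack(">I", len(image_bytes)) + image_bytes +
            struct.pack(">I", len(raw_bytes)) + raw_bytes)

def parse_packet(packet: bytes) -> tuple:
    """Inverse of build_packet: recover the image and raw ventilator data."""
    img_len = struct.unpack_from(">I", packet, 0)[0]
    image = packet[4:4 + img_len]
    raw_len = struct.unpack_from(">I", packet, 4 + img_len)[0]
    raw = packet[8 + img_len:8 + img_len + raw_len]
    return image, raw
```

Because both fields carry their own length prefix, the receiver can split a frame without any delimiter scanning, which suits a fixed 100 ms transmission cycle.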

Figure 3

Ventilator data description

The tele-monitoring system communicated with the ventilator through a computer interface emulator (CIE) protocol provided by Maquet. The data from the ventilator were divided into four types: curve channels carrying waveform data, breath channels carrying numeric data, setting channels carrying ventilator setting values, and alarm channels carrying numeric and character data that include the time of occurrence, a description, and a sound. Although 180 channels could be extracted from the ventilator, only 79 major channels were extracted owing to data-traffic limitations. The extracted data are summarized in Table 1, with the important channels shown in bold. Experienced staff from respiratory medicine participated in selecting the important ventilator data channels. This study was approved by the Institutional Review Board of the Seoul Asan Medical Center Hospital (IRB 2021-2864). No direct patient data were used in the study.
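The four channel types can be represented with a small data model; the type names, values, and record fields below are illustrative and are not taken from the CIE specification:

```python
from dataclasses import dataclass
from enum import Enum

class ChannelType(Enum):
    """The four data types delivered by the ventilator (names illustrative)."""
    CURVE = "curve"      # waveform data (e.g., pressure, flow, volume)
    BREATH = "breath"    # per-breath numeric values
    SETTING = "setting"  # ventilator setting values
    ALARM = "alarm"      # numeric and character data

@dataclass
class AlarmEvent:
    """Alarm channel record: time of occurrence, description, and sound."""
    time: str
    description: str
    sound: str
```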

Table 1 Channel configurations.

To enable real-time communication, packet data were transmitted via the CIE protocol every 100 ms. The ventilator provided volume and pressure curve data only in extended mode, so the flow curve channel was calculated in real time from the received volume data. Each channel was stored as a CSV file every minute.
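Since flow is the time derivative of volume, the flow curve can be recovered from the 50 Hz volume samples. The paper does not give the formula, so the first-order finite difference below is an assumption:

```python
def flow_from_volume(volume_ml, dt_s=0.02):
    """Approximate flow (mL/s) as the finite difference of the volume
    curve sampled at 50 Hz (dt = 20 ms). The first sample is padded
    with zero flow so the output matches the input length."""
    flow = [0.0]
    for prev, curr in zip(volume_ml, volume_ml[1:]):
        flow.append((curr - prev) / dt_s)
    return flow
```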

Image processing

The camera used in the system was a low-light camera module (Arducam) capable of capturing video at a resolution of 1920 × 1080 at 30 fps. It is based on a 1/2.8" Sony IMX291 image sensor with a resolution of 2 MP (1945 H × 1109 V), has a field of view of 100°, and a focusing range of 3.3 ft to infinity. The video image processing involved four steps; each step was performed within 100 ms for real-time display, resulting in a flattened video image with minimal blind spots. As shown in Fig. 4, the processing steps are camera calibration, homography estimation using inverse perspective transformation, image registration, and image addition. The first step corrects the radial distortion caused by the camera lens: a checkerboard captured from multiple perspectives is used to measure and correct the internal and external camera parameters through inverse calculation29. The camera-calibration function provided by OpenCV was used for this purpose.
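Radial distortion is commonly modeled with the same polynomial (Brown–Conrady) model that OpenCV's calibration estimates; calibration recovers the coefficients from the checkerboard views, and undistortion inverts the mapping. A minimal pure-Python sketch of the radial term, with illustrative coefficients k1 and k2:

```python
def apply_radial_distortion(x, y, k1, k2):
    """Apply the radial term of the Brown-Conrady lens model to a
    normalized image point (x, y). Camera calibration estimates k1
    and k2 by inverse calculation from checkerboard views; the
    undistortion step then inverts this mapping."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor
```

With k1 = k2 = 0 the mapping is the identity, which is a convenient sanity check on a calibration pipeline.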

Figure 4

Flow chart of image processing.

In the second step, an inverse perspective transformation was used to transform the original image. First, the four corners of the ventilator panel in the original image (Fig. 5a) were identified and marked (Fig. 5)30. Using these corner positions, the transformed image (Fig. 5b) was generated by applying the inverse perspective transformation described in Eq. (1)31.

$$w\left[\begin{array}{c}x^{\prime}\\ y^{\prime}\\ 1\end{array}\right]=\left[\begin{array}{ccc}a& b& c\\ d& e& f\\ g& h& 1\end{array}\right]\left[\begin{array}{c}x\\ y\\ 1\end{array}\right],$$

(1)

where \((x,y)\) are the coordinates in the original image and \((x^{\prime},y^{\prime})\) are the corresponding coordinates after the inverse perspective transformation. In the third step, we improved the precision of the image by aligning the left and right images through image registration, using the RANSAC (Random Sample Consensus) algorithm32. The algorithm analyzed the pixel values of both images, identified common features, and fitted the correspondences with the least-squares method. The transformation-matrix coefficients were then obtained from these features, ensuring that the positions of matching features were consistent between the two views. In the final step, we combined the two images by multiplying each pixel value by a weight of 0.5. The panel content remained the same, while differences and obstacles between the views were attenuated, partially solving the issue of information loss. Figure 6 shows the results of image processing.
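The second and fourth steps can be sketched directly from Eq. (1) and the 0.5-weight blend. The functions below are a minimal pure-Python illustration, not the authors' implementation; H is a 3 × 3 matrix given as nested lists:

```python
def warp_point(H, x, y):
    """Apply the perspective transform of Eq. (1) to a point (x, y):
    w * [x', y', 1]^T = H @ [x, y, 1]^T, then divide by w."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    xp = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    yp = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return xp, yp

def blend(img_a, img_b, weight=0.5):
    """Final step: combine two registered images pixel-wise with equal
    weights, so shared panel content is preserved while obstacles
    visible in only one view are attenuated."""
    return [[weight * pa + (1 - weight) * pb for pa, pb in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]
```

For a pure translation (g = h = 0, so w = 1) the transform reduces to shifting each point by (c, f), which is a quick way to verify the matrix layout.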

Figure 5

Inverse perspective transformation.

Figure 6

Image processing results.

Graphic user interface

We created two graphical user interfaces (GUIs) to monitor and analyze data in real time. These interfaces were developed with PyQt, a Python binding for the Qt framework.

Figure 7 shows the live monitoring GUI, which is divided into two parts. The left part shows the images obtained from the two cameras with minimized blind spots: each image underwent homography estimation and image registration and was then combined with the other. These procedures were performed on the miniPC, and the result was sent to the server computer at 10 fps (every 100 ms). The right part of Fig. 7 shows the data received via communication with the Servo-i ventilator, displayed in the same format as the Servo-i ventilator screen, so that staff familiar with the ventilator can read it without difficulty.

Figure 7

Live monitoring GUI. Numerical data, such as peak pressure, mean pressure, PEEP, respiratory rate, FiO2, inspiratory-to-expiratory ratio, expiratory minute volume, inspiratory tidal volume, and expiratory tidal volume, as well as waveform data, can be monitored in real time.

Figure 8 shows the data analysis GUI, which allows the user to view the channels listed in bold in Table 1. The data stored in CSV format were used to review previous data. The 26 main channels (out of 79) used for statistical analysis are listed in Table 1. To analyze the data in these channels, the GUI provides tools such as box plots and summary statistics: median, interquartile range, 90% confidence interval, minimum, and maximum. If an alarm was triggered, the user could check its time and details to determine the type of alarm.
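The summary statistics shown in the analysis GUI can be computed per channel from the stored CSV values. A minimal sketch using the standard-library statistics module (the function name and return layout are illustrative):

```python
import statistics

def summarize_channel(values):
    """Summary statistics displayed in the data-analysis GUI for one
    stored channel: median, interquartile range, minimum, and maximum.
    The inclusive quantile method is an assumption; the paper does not
    specify how quartiles were computed."""
    q = statistics.quantiles(values, n=4, method="inclusive")
    return {
        "median": statistics.median(values),
        "iqr": q[2] - q[0],  # Q3 - Q1
        "min": min(values),
        "max": max(values),
    }
```

In the GUI, these summaries feed the box plots directly, since a box plot is defined by the same five quantities plus the whisker rule.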

Figure 8

Data analysis GUI. (a) The Overview tab allows users to view stored patient data and move the slider to browse the entire record. It can be used to monitor patient information and the start and end times of ventilator data storage; (b) the Respiratory mechanics tab shows the respiratory mechanics via box plots; (c,d) the Breath data summary tab and the Setting and Alarm data summary tab, respectively, allow statistical analysis, such as determination of the median, interquartile range, minimum, and maximum of the main data of the stored breath and setting channels. *A hyphen indicates that no parameters varying with the ventilation mode exist.
