Instytut Podstawowych Problemów Techniki
Polskiej Akademii Nauk

Staff

mgr inż. Piotr Jarosik

Zakład Informatyki i Nauk Obliczeniowych (ZIiNO)
Pracownia Metod Komputerowych Inżynierii Materiałowej (PMKIM)
position: programmer
PhD student
phone: (+48) 22 826 12 81 ext. 420
room: 414
e-mail:
ORCID: 0000-0001-5198-5012

Recent publications
1.  Lewandowski M.J., Karwat P., Jarosik P., Rozbicki J., Walczak M., Smach H., A High-Speed Ultrasound Full-Matrix Capture Acquisition System for Robotic Weld Inspection, Research and Review Journal of Nondestructive Testing, ISSN: 2941-4989, DOI: 10.58286/28163, Vol.1, No.1, pp.1-6, 2023

Abstract:
The Phased-Array Ultrasonic Technique is traditionally used for the non-destructive inspection of welds and is supported by industrial-grade inspection equipment. Full-Matrix Capture (FMC) with the Total Focusing Method (TFM) provides new capabilities and multimodal imaging, but available commercial scanners have limitations in acquisition speed (30–300 MB/s) and reconstruction speed. Our goal was to develop a solution for FMC acquisition that can be applied to high-speed robotized weld scanning (a speed of 100 mm/s with a resolution of 1 mm). For FMC acquisition, we applied a portable programmable ultrasound research system, the us4R-lite™ (us4us Ltd., Poland), in a 64:256 channel configuration with standard angled 32-element Phased-Array probes. The system can acquire and store raw RF or demodulated I/Q data at a speed of 2–6 GB/s, enabling real-time FMC at high speed. Data can be stored on a PC during scanning and processed by a high-performance GPU. We successfully tested our experimental setup while scanning flat-section welds with a motorized scanner at a speed approaching 100 mm/s. The acquisition and processing software developed uses Nvidia CUDA on the GPU and can manage real-time storage and scanning. Next, we are planning to integrate the solution into an industrial-grade high-speed FMC acquisition system with embedded GPU processing.
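As background for readers less familiar with TFM, the sketch below shows a minimal, unoptimized delay-and-sum reconstruction of an FMC dataset in Python/NumPy. It only illustrates the general TFM principle; the linear-array geometry, the constant sound speed and all names are assumptions, not the CUDA pipeline described above.

    # Minimal Total Focusing Method (TFM) sketch for an FMC dataset.
    # Assumptions: linear array, constant sound speed, rf[tx, rx, sample].
    import numpy as np

    def tfm(rf, el_x, fs, c, grid_x, grid_z, t0=0.0):
        """Delay-and-sum every tx/rx pair onto an image grid."""
        n_el = el_x.size
        n_samples = rf.shape[-1]
        xg, zg = np.meshgrid(grid_x, grid_z, indexing="ij")
        image = np.zeros_like(xg)
        # One-way distances from each element to each pixel.
        dist = np.sqrt((xg[None] - el_x[:, None, None]) ** 2 + zg[None] ** 2)
        for tx in range(n_el):
            for rx in range(n_el):
                delay = (dist[tx] + dist[rx]) / c - t0          # round-trip time [s]
                idx = np.round(delay * fs).astype(int)          # nearest RF sample
                valid = (idx >= 0) & (idx < n_samples)
                contrib = np.where(valid, rf[tx, rx, np.clip(idx, 0, n_samples - 1)], 0.0)
                image += contrib
        return image

    if __name__ == "__main__":
        # Example with random data: 32-element probe, 2048 samples per A-scan.
        n_el, fs, c = 32, 50e6, 5900.0                          # steel-like sound speed [m/s]
        el_x = (np.arange(n_el) - n_el / 2) * 0.5e-3
        rf = np.random.randn(n_el, n_el, 2048)
        img = tfm(rf, el_x, fs, c,
                  grid_x=np.linspace(-10e-3, 10e-3, 128),
                  grid_z=np.linspace(1e-3, 40e-3, 256))
        print(img.shape)                                        # (128, 256)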

Keywords:
Ultrasonic Testing (UT), robotic inspection, PAUT, FMC, TFM, GPU processing

Authors' affiliations:
Lewandowski M.J. - other affiliation
Karwat P. - other affiliation
Jarosik P. - other affiliation
Rozbicki J. - other affiliation
Walczak M. - other affiliation
Smach H. - IPPT PAN
2.  Byra M., Jarosik P., Dobruch-Sobczak K., Klimonda Z., Piotrzkowska-Wróblewska H., Litniewski J., Nowicki A., Joint segmentation and classification of breast masses based on ultrasound radio-frequency data and convolutional neural networks, Ultrasonics, ISSN: 0041-624X, DOI: 10.1016/j.ultras.2021.106682, Vol.121, pp.106682-1-9, 2022

Abstract:
In this paper, we propose a novel deep learning method for joint classification and segmentation of breast masses based on radio-frequency (RF) ultrasound (US) data. In comparison to commonly used classification and segmentation techniques, utilizing B-mode US images, we train the network with RF data (data before envelope detection and dynamic compression), which are considered to include more information on tissue’s physical properties than standard B-mode US images. Our multi-task network, based on the Y-Net architecture, can effectively process large matrices of RF data by mixing 1D and 2D convolutional filters. We use data collected from 273 breast masses to compare the performance of networks trained with RF data and US images. The multi-task model developed based on the RF data achieved good classification performance, with area under the receiver operating characteristic curve (AUC) of 0.90. The network based on the US images achieved AUC of 0.87. In the case of the segmentation, we obtained mean Dice scores of 0.64 and 0.60 for the approaches utilizing US images and RF data, respectively. Moreover, the interpretability of the networks was studied using class activation mapping technique and by filter weights visualizations.
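The sketch below illustrates the general idea of mixing 1D convolutions (along the long fast-time axis of the RF data) with 2D convolutions across scan lines, which is how such networks keep large RF matrices tractable. It is an illustrative PyTorch front end with assumed layer sizes, not the Y-Net architecture used in the paper.

    # Sketch of mixing 1D and 2D convolutions for large RF matrices
    # (illustrative layer sizes, not the paper's Y-Net architecture).
    import torch
    import torch.nn as nn

    class RFFrontEnd(nn.Module):
        def __init__(self):
            super().__init__()
            # 1D convolutions act along the fast-time (axial) axis only,
            # shrinking the very long sample dimension before 2D processing.
            self.conv1d = nn.Sequential(
                nn.Conv1d(1, 8, kernel_size=15, stride=4, padding=7), nn.ReLU(),
                nn.Conv1d(8, 16, kernel_size=15, stride=4, padding=7), nn.ReLU(),
            )
            # 2D convolutions then mix information across scan lines.
            self.conv2d = nn.Sequential(
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            )

        def forward(self, rf):                      # rf: (batch, 1, lines, samples)
            b, _, lines, samples = rf.shape
            x = rf.reshape(b * lines, 1, samples)   # treat each RF line as a 1D signal
            x = self.conv1d(x)                      # (b*lines, 16, samples/16)
            x = x.reshape(b, lines, 16, -1).permute(0, 2, 1, 3)  # (b, 16, lines, s')
            return self.conv2d(x)

    if __name__ == "__main__":
        model = RFFrontEnd()
        out = model(torch.randn(2, 1, 128, 4096))   # 128 RF lines, 4096 samples each
        print(out.shape)                            # torch.Size([2, 32, 128, 256])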

Keywords:
breast mass classification, breast mass segmentation, convolutional neural networks, deep learning, quantitative ultrasound, ultrasound imaging

Authors' affiliations:
Byra M. - IPPT PAN
Jarosik P. - IPPT PAN
Dobruch-Sobczak K. - IPPT PAN
Klimonda Z. - IPPT PAN
Piotrzkowska-Wróblewska H. - IPPT PAN
Litniewski J. - IPPT PAN
Nowicki A. - IPPT PAN
140p.
3.  Byra M., Jarosik P., Szubert A., Galperine M., Ojeda-Fournier H., Olson L., Comstock Ch., Andre M., Breast mass segmentation in ultrasound with selective kernel U-Net convolutional neural network, Biomedical Signal Processing and Control, ISSN: 1746-8094, DOI: 10.1016/j.bspc.2020.102027, Vol.61, pp.102027-1-10, 2020

Abstract:
In this work, we propose a deep learning method for breast mass segmentation in ultrasound (US). Variations in breast mass size and image characteristics make the automatic segmentation difficult. To address this issue, we developed a selective kernel (SK) U-Net convolutional neural network. The aim of the SKs was to adjust the network's receptive fields via an attention mechanism, and fuse feature maps extracted with dilated and conventional convolutions. The proposed method was developed and evaluated using US images collected from 882 breast masses. Moreover, we used three datasets of US images collected at different medical centers for testing (893 US images). On our test set of 150 US images, the SK-U-Net achieved a mean Dice score of 0.826, and outperformed the regular U-Net, which achieved a Dice score of 0.778. When evaluated on three separate datasets, the proposed method yielded mean Dice scores ranging from 0.646 to 0.780. Additional fine-tuning of our better-performing model with data collected at different centers improved mean Dice scores by ~6%. SK-U-Net utilized both dilated and regular convolutions to process US images. We found a strong correlation, Spearman's rank coefficient of 0.7, between the utilization of dilated convolutions and breast mass size in the case of the network's expansion path. Our study shows the usefulness of deep learning methods for breast mass segmentation. The SK-U-Net implementation and pre-trained weights can be found at github.com/mbyr/bus_seg.
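A minimal sketch of a selective-kernel style block is given below: two branches (a regular and a dilated convolution) are fused with a learned, per-channel attention weight. Channel counts and the reduction factor are assumptions; the authors' actual implementation is the one published at github.com/mbyr/bus_seg.

    # Selective-kernel (SK) style fusion of regular and dilated convolutions
    # (illustrative sketch; see github.com/mbyr/bus_seg for the paper's code).
    import torch
    import torch.nn as nn

    class SKBlock(nn.Module):
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.conv_regular = nn.Conv2d(channels, channels, 3, padding=1)
            self.conv_dilated = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
            # Small MLP producing per-channel attention over the two branches.
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, 2 * channels),
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            u_reg = self.conv_regular(x)
            u_dil = self.conv_dilated(x)
            s = (u_reg + u_dil).mean(dim=(2, 3))            # global average pooling
            attn = self.fc(s).reshape(b, 2, c, 1, 1).softmax(dim=1)
            # Attention decides, per channel, how much each receptive field is used.
            return attn[:, 0] * u_reg + attn[:, 1] * u_dil

    if __name__ == "__main__":
        block = SKBlock(32)
        print(block(torch.randn(1, 32, 64, 64)).shape)      # torch.Size([1, 32, 64, 64])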

Keywords:
attention mechanism, breast mass segmentation, convolutional neural networks, deep learning, receptive field, ultrasound imaging

Authors' affiliations:
Byra M. - IPPT PAN
Jarosik P. - other affiliation
Szubert A. - other affiliation
Galperine M. - other affiliation
Ojeda-Fournier H. - University of California (US)
Olson L. - University of California (US)
Comstock Ch. - Memorial Sloan-Kettering Cancer Center (US)
Andre M. - University of California (US)
140p.
4.  Jarosik P., Klimonda Z., Lewandowski M., Byra M., Breast lesion classification based on ultrasonic radio-frequency signals using convolutional neural networks, Biocybernetics and Biomedical Engineering, ISSN: 0208-5216, DOI: 10.1016/j.bbe.2020.04.002, Vol.40, No.3, pp.977-986, 2020

Abstract:
We propose a novel approach to breast mass classification based on deep learning models that utilize raw radio-frequency (RF) ultrasound (US) signals. US images, typically displayed by US scanners and used to develop computer-aided diagnosis systems, are reconstructed using raw RF data. However, information related to the physical properties of tissues present in RF signals is partially lost due to the irreversible compression necessary to make raw data readable to the human eye. To utilize the information present in raw US data, we develop deep learning models that can automatically process small 2D patches of RF signals and their amplitude samples. We compare our approach with a classification method based on the Nakagami parameter, a widely used quantitative US technique utilizing RF data amplitude samples. Our better-performing deep learning model, trained using RF signals and their envelope samples, achieved good classification performance, with the area under the receiver operating characteristic curve (AUC) and balanced accuracy attaining 0.772 and 0.710, respectively. The proposed method significantly outperformed the Nakagami parameter-based classifier, which achieved AUC and accuracy of 0.64 and 0.611, respectively. The developed deep learning models were used to generate parametric maps illustrating the level of mass malignancy. Our study presents the feasibility of using RF data for the development of deep learning breast mass classification models.
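For context, the Nakagami shape parameter used by the reference classifier can be obtained from envelope samples with the standard moment-based estimator, as in the generic sketch below (the synthetic data and the lack of windowing are assumptions, not the paper's implementation).

    # Moment-based estimate of the Nakagami shape parameter m from envelope
    # samples (generic illustration, not the paper's exact implementation).
    import numpy as np

    def nakagami_m(envelope):
        """m = E[y^2]^2 / Var(y^2) for envelope samples y."""
        y2 = np.asarray(envelope, dtype=float) ** 2
        return y2.mean() ** 2 / y2.var()

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # A Rayleigh-distributed envelope corresponds to fully developed speckle (m ~ 1).
        envelope = rng.rayleigh(scale=1.0, size=100_000)
        print(round(nakagami_m(envelope), 2))   # close to 1.0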

Keywords:
breast lesion classification, convolutional neural networks, deep learning, radio-frequency signals, ultrasound imaging

Authors' affiliations:
Jarosik P. - IPPT PAN
Klimonda Z. - IPPT PAN
Lewandowski M. - IPPT PAN
Byra M. - IPPT PAN
100p.

Chapters in recent monographs
1.  Kidziński Ł., Mohanty S.P., Ong C.F., Huang Z., Zhou S., Pechenko A., Stelmaszczyk A., Jarosik P., Pavlov M., Kolesnikov S., Plis S., Chen Z., Zhang Z., Chen J., Shi J., Zheng Z., Yuan Ch., Lin Z., Michalewski H., Milos P., Osinski B., Melnik A., Schilling M., Ritter H., Carroll S.F., Hicks J., Levine S., Salathé M., Delp S., The NIPS '17 Competition: Building Intelligent Systems, chapter: Learning to Run Challenge Solutions: Adapting Reinforcement Learning Methods for Neuromusculoskeletal Environments, Springer, pp.121-153, 2018

Conference papers
1.  Byra M., Jarosik P., Karwat P., Klimonda Z., Lewandowski M., Implicit Neural Representations for Speed-of-Sound Estimation in Ultrasound, UFFC-JS, 2024 IEEE Ultrasonics, Ferroelectrics, and Frequency Control Joint Symposium, 2024-09-22/09-26, Taipei (TW), pp.1-4, 2024

Abstract:
Accurate estimation of the speed-of-sound (SoS) is important for ultrasound (US) image reconstruction techniques and tissue characterization. Various approaches have been proposed to calculate SoS, ranging from tomography-inspired algorithms like CUTE to convolutional networks, and more recently, physics-informed optimization frameworks based on differentiable beamforming. In this work, we utilize implicit neural representations (INRs) for SoS estimation in US. INRs are a type of neural network architecture that encodes continuous functions, such as images or physical quantities, through the weights of a network. Implicit networks may overcome the current limitations of SoS estimation techniques, which mainly arise from the use of non-adaptable and oversimplified physical models of tissue. Moreover, convolutional networks for SoS estimation, usually trained using simulated data, often fail when applied to real tissues due to out-of-distribution and data-shift issues. In contrast, implicit networks do not require extensive training datasets since each implicit network is optimized for an individual data case. This adaptability makes them suitable for processing US data collected from varied tissues and across different imaging protocols. We evaluated the proposed SoS estimation method based on INRs using data collected from a tissue-mimicking phantom containing four cylindrical inclusions, with SoS values ranging from 1480 m/s to 1600 m/s. The inclusions were immersed in a material with an SoS value of 1540 m/s. In experiments, the proposed method achieved strong performance, clearly demonstrating the usefulness of implicit networks for quantitative US applications.
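The sketch below shows only the core ingredient of such an approach: a small coordinate MLP (an "implicit network") that maps pixel coordinates to a speed-of-sound value and is optimized per data case. The network size, the output range and the placeholder mean-squared-error objective are assumptions; the paper optimizes the field against an ultrasound-derived objective instead.

    # Coordinate MLP ("implicit neural representation") of a speed-of-sound map.
    # The training loss below is a placeholder; in the paper the network is
    # optimized per data case against an ultrasound-derived objective.
    import torch
    import torch.nn as nn

    class SoSField(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, xz):                       # xz: (N, 2) pixel coordinates
            # Keep predictions in a plausible range, e.g. 1400-1700 m/s.
            return 1400.0 + 300.0 * torch.sigmoid(self.net(xz))

    if __name__ == "__main__":
        field = SoSField()
        optimizer = torch.optim.Adam(field.parameters(), lr=1e-3)
        coords = torch.rand(1024, 2)                 # normalized (x, z) positions
        target = torch.full((1024, 1), 1540.0)       # placeholder supervision [m/s]
        for _ in range(200):
            optimizer.zero_grad()
            loss = ((field(coords) - target) ** 2).mean()
            loss.backward()
            optimizer.step()
        print(float(loss))                           # decreases toward 0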

Keywords:
beamforming, deep learning, implicit neural representations, speed-of-sound, quantitative ultrasound

Authors' affiliations:
Byra M. - IPPT PAN
Jarosik P. - IPPT PAN
Karwat P. - IPPT PAN
Klimonda Z. - IPPT PAN
Lewandowski M. - IPPT PAN
2.  Jarosik P., Byra M., Klimonda Z., Dłużewski P., Lewandowski M., Deep Reinforcement Learning Approach for Adaptive Ultrasound Image Reconstruction with a Flexible Array Probe, UFFC-JS, 2024 IEEE Ultrasonics, Ferroelectrics, and Frequency Control Joint Symposium, 2024-09-22/09-26, Taipei (TW), No.8573, pp.62-62, 2024

Abstract:
Background, Motivation and Objective
Flexible ultrasound (US) arrays are a promising technology that may further democratize US technology — e.g. in wearable US. Flexible transducers also pose challenges in image reconstruction, as they require adaptable beamforming delays due to a changing geometry of the probe. Various approaches have been proposed for flexible array shape estimation and beamforming, e.g. external sensors, deep learning and optimization. In this work, we propose a deep reinforcement learning (DRL) approach, where a software agent is responsible for tracking the array shape to properly reconstruct US B-mode image.

Statement of Contribution/Methods
Here we considered a reinforcement learning environment as a setup consisting of a US system with the flexible array and a point-target phantom. The environment was simulated using the j-Wave software. The environment's state consisted of the current shape of the array, modeled as a sinusoid s = a sin(bx + c), and the current model of the array assumed by the beamformer: s' = a' sin(b'x + c') (single-element STA scheme). A single episode consisted of 7 steps; the parameters a, b, c could vary from step to step (within bounds of physical constraints). The agent observed the current B-mode image, could modify the current values of a', b' and c' (action), and received a reward equal to a linear combination of the coherence factor and the structural similarity index measure (SSIM) between the current and reference image. We trained our agent using the TD3 approach and tested it for various settings of a, b and c.

Results/Discussion
Our agent achieved an average SSIM of 0.73 per episode step. Figure 1 shows the sequence of states and images within an example episode; the agent was able to correctly react to the change of the array shape. The DRL approach has the following advantages compared to other methods: the agent can be trained to operate in an environment with a changing state; and the agent can be trained to maximize expected return (dependent on beamforming quality metric), which does not have to be differentiable.

Fig. 1. (Top) B-mode images, estimated array shape (red) and actual shape (black) within an example episode. (Bottom) Coherence factor and SSIM achieved by the agent during the episode
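As an illustration of the geometry involved, the sketch below places array elements on the sinusoid s = a sin(bx + c) and shows how a mismatch between the true and the assumed shape parameters translates into beamforming delay errors. The pitch, parameter values and sound speed are illustrative assumptions, not the simulation settings used in the experiments.

    # Element positions on a sinusoidally bent array s = a*sin(b*x + c) and the
    # resulting delay error when the beamformer assumes wrong shape parameters.
    # Parameter values are illustrative only.
    import numpy as np

    def element_positions(x, a, b, c):
        """(x, z) coordinates of elements lying on z = a*sin(b*x + c)."""
        return np.stack([x, a * np.sin(b * x + c)], axis=-1)

    def delays_to_point(elements, point, c_sound=1540.0):
        """One-way propagation times from each element to a focal point [s]."""
        return np.linalg.norm(elements - point, axis=-1) / c_sound

    if __name__ == "__main__":
        x = (np.arange(64) - 31.5) * 0.3e-3               # 64 elements, 0.3 mm pitch
        true_shape = element_positions(x, a=2e-3, b=300.0, c=0.0)
        assumed = element_positions(x, a=1e-3, b=300.0, c=0.5)
        focus = np.array([0.0, 20e-3])                    # point target at 20 mm depth
        err = delays_to_point(true_shape, focus) - delays_to_point(assumed, focus)
        print(f"max delay error: {np.abs(err).max() * 1e9:.1f} ns")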

Authors' affiliations:
Jarosik P. - IPPT PAN
Byra M. - IPPT PAN
Klimonda Z. - IPPT PAN
Dłużewski P. - IPPT PAN
Lewandowski M. - IPPT PAN
3.  Jarosik P., Lewandowski M., Klimonda Z., Byra M., Pixel-wise deep reinforcement learning approach for ultrasound image denoising, IUS, IEEE International Ultrasonics Symposium (IUS), 2021, 2021-09-11/09-16, on-line (US), DOI: 10.1109/IUS52206.2021.9593591, pp.1-4, 2021

Abstract:
Ultrasound (US) imaging is widely used for tissue characterization. However, US images commonly suffer from speckle noise, which degrades perceived image quality. Various deep learning approaches have been proposed for US image denoising, but most of them lack interpretability of how the network processes the US image (the black-box problem). In this work, we apply a deep reinforcement learning (RL) approach, pixelRL, to US image denoising. The technique utilizes a set of easily interpretable and commonly used filtering operations applied in a pixel-wise manner. In RL, software agents act in an unknown environment and receive appropriate numerical rewards. In our case, each pixel of the input US image has an agent, and the state of the environment is the current US image. Agents iteratively denoise the US image by executing the following pixel-wise pre-defined actions: Gaussian, bilateral, median and box filtering, pixel value increment/decrement, and no action. The proposed approach can be used to generate action maps depicting operations applied to process different parts of the US image. Agents were pre-trained on natural gray-scale images and evaluated on breast mass US images. To enable the evaluation, we artificially corrupted the US images with noise. Compared with the reference (noise-free US images), filtration of the images with the proposed method increased the average peak signal-to-noise ratio (PSNR) from 14 dB to 26 dB and increased the structural similarity index from 0.22 to 0.54. Our work confirms that it is feasible to use pixel-wise RL techniques for US image denoising.
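The sketch below shows one denoising step of such a pixel-wise action scheme: every pixel selects one operation from a small action set, and the corresponding filtered value is taken at that pixel. The action map is random here (a stand-in for the learned policy), and the action set is a reduced, illustrative subset of the one listed above.

    # One denoising step of a pixel-wise action scheme in the spirit of pixelRL.
    # The action map would normally come from the learned policy network.
    import numpy as np
    from scipy import ndimage

    def apply_action_map(image, actions):
        """Each pixel picks one of: no-op, value +/- 1/255, box, Gaussian, median."""
        candidates = np.stack([
            image,                                        # 0: do nothing
            image + 1.0 / 255.0,                          # 1: pixel value increment
            image - 1.0 / 255.0,                          # 2: pixel value decrement
            ndimage.uniform_filter(image, size=3),        # 3: box filter
            ndimage.gaussian_filter(image, sigma=0.5),    # 4: Gaussian filter
            ndimage.median_filter(image, size=3),         # 5: median filter
        ])
        rows, cols = np.indices(image.shape)
        return candidates[actions, rows, cols]

    if __name__ == "__main__":
        noisy = np.clip(0.5 + 0.1 * np.random.randn(128, 128), 0.0, 1.0)
        actions = np.random.randint(0, 6, size=noisy.shape)   # stand-in for the policy
        print(apply_action_map(noisy, actions).shape)         # (128, 128)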

Keywords:
deep reinforcement learning, ultrasound imaging, image denoising, filtration, breast cancer

Authors' affiliations:
Jarosik P. - IPPT PAN
Lewandowski M. - IPPT PAN
Klimonda Z. - IPPT PAN
Byra M. - IPPT PAN
4.  Lewandowski M., Jarosik P., Tasinkevych Y., Walczak M., Efficient GPU implementation of 3D spectral domain synthetic aperture imaging, IUS 2020, IEEE International Ultrasonics Symposium, 2020-09-07/09-11, Las Vegas (US), DOI: 10.1109/IUS46767.2020.9251552, pp.1-3, 2020

Abstract:
In this work, we considered the implementation of a 3D volume reconstruction algorithm for single plane-wave ultrasound insonification. We review the theory behind the Hybrid Spectral-Domain Imaging (HSDI) algorithm, provide details of the algorithm implementation for Nvidia CUDA GPU cards, and discuss the performance evaluation results. The average time required to reconstruct a single data volume using our GPU implementation of the HSDI algorithm was 22 ms. We also present an iso-surface extraction result using a marching cubes algorithm. Our work constitutes a preliminary research for further development and implementation of 3D volume reconstruction using GPU implementation of the spectral domain imaging algorithm.
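For the iso-surface step mentioned above, a generic marching cubes extraction can be performed as in the short scikit-image sketch below, run here on a synthetic volume; this is an illustration only, not the paper's GPU pipeline.

    # Iso-surface extraction from a reconstructed volume with marching cubes
    # (scikit-image illustration on a synthetic sphere, not the GPU pipeline).
    import numpy as np
    from skimage import measure

    # Synthetic "volume": a ball of high intensity inside a 64^3 grid.
    z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
    volume = (np.sqrt(x**2 + y**2 + z**2) < 0.5).astype(float)

    # Extract the surface at the 0.5 iso-level.
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
    print(verts.shape, faces.shape)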

Keywords:
ultrasound imaging, 3D ultrasound, volumetric imaging, gpu

Authors' affiliations:
Lewandowski M. - IPPT PAN
Jarosik P. - other affiliation
Tasinkevych Y. - IPPT PAN
Walczak M. - IPPT PAN
20p.
5.  Jarosik P., Lewandowski M., Automatic Ultrasound Guidance Based on Deep Reinforcement Learning, IUS 2019, IEEE International Ultrasonics Symposium, 2019-10-06/10-09, Glasgow (GB), DOI: 10.1109/ULTSYM.2019.8926041, pp.475-478, 2019

Abstract:
Ultrasound is becoming the modality of choice for everyday medical diagnosis, due to its mobility and decreasing price. As the availability of ultrasound diagnostic devices for untrained users grows, appropriate guidance becomes desirable. This kind of support could be provided by a software agent that easily adapts to new conditions and whose role is to instruct the user on how to obtain optimal settings of the imaging system during an examination. In this work, we verified the feasibility of implementing and training such an agent for ultrasound, taking the deep reinforcement learning approach. The tasks it was given were to find the optimal position of the transducer's focal point (FP task) and to find an appropriate scanning plane (PP task). The ultrasound environment consisted of a linear-array transducer acquiring information from a tissue phantom with cysts forming an object-of-interest (OOI). The environment was simulated in the Field-II software. The agent could perform the following actions: move the position of the probe to the left/right, move the focal depth upwards/downwards, rotate the probe clockwise/counter-clockwise, or do not move. Additional noise was applied to the current probe setting. The only observations the agent received were B-mode frames. The agent acted according to a stochastic policy modeled by a deep convolutional neural network, and was trained using the vanilla policy gradient update algorithm. After the training, the agent's ability to accurately locate the position of the focal depth and scanning plane improved. Our preliminary results confirmed that deep reinforcement learning can be applied to the ultrasound environment.
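The sketch below shows a generic "vanilla" policy-gradient (REINFORCE) update for a discrete action space of the kind described above, using dummy observations and a toy MLP policy; the paper's agent instead observes B-mode frames through a convolutional policy network.

    # Generic vanilla policy-gradient (REINFORCE) update for a discrete action
    # space, shown on dummy observations (toy MLP, not the paper's CNN policy).
    import torch
    import torch.nn as nn

    n_actions = 7      # e.g. move left/right, focus up/down, rotate cw/ccw, stay
    policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, n_actions))
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

    def update(observations, actions, returns):
        """Increase log-probability of actions proportionally to their returns."""
        logits = policy(observations)
        log_prob = torch.distributions.Categorical(logits=logits).log_prob(actions)
        loss = -(log_prob * returns).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return float(loss)

    if __name__ == "__main__":
        obs = torch.randn(32, 16)                     # stand-in for B-mode features
        acts = torch.randint(0, n_actions, (32,))
        rets = torch.randn(32)                        # discounted episode returns
        print(update(obs, acts, rets))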

Keywords:
ultrasound guidance, reinforcement learning, deep learning

Authors' affiliations:
Jarosik P. - other affiliation
Lewandowski M. - IPPT PAN
6.  Jarosik P., Lewandowski M., The feasibility of deep learning algorithms integration on a GPU-based ultrasound research scanner, IUS 2017, IEEE International Ultrasonics Symposium, 2017-09-06/09-09, Washington (US), DOI: 10.1109/ULTSYM.2017.8091750, pp.1-4, 2017

Abstract:
Ultrasound medical diagnostics is a real-time modality based on a doctor's interpretation of images. So far, automated Computer-Aided Diagnostic tools have not been widely applied to ultrasound imaging. The emerging methods in Artificial Intelligence, namely deep learning, gave rise to new applications in medical imaging modalities. The work's objective was to show the feasibility of implementing deep learning algorithms directly on a research scanner with GPU software beamforming. We have implemented and evaluated two deep neural network architectures as part of the signal processing pipeline on the ultrasound research platform USPlatform (us4us Ltd., Poland). The USPlatform is equipped with a GPU cluster, enabling full software-based channel data processing as well as the integration of open-source deep learning frameworks. The first neural model (S-4-2) is a classical convolutional network for one-class classification of baby body parts. We propose a simple 6-layer network for this task. The model was trained and evaluated on a dataset consisting of 786 ultrasound images of a fetal training phantom. The second model (Gu-net) is a fully convolutional neural network for brachial plexus localisation. The model uses a 'U-net'-like architecture to compute the overall probability of target detection and the probability mask of possible target locations. The model was trained and evaluated on 5640 ultrasound B-mode frames. Both training and inference were performed on a multi-GPU (Nvidia Titan X) cluster integrated with the platform. As performance metrics we used: accuracy as the percentage of correct answers in classification, the Dice coefficient for object detection, and the mean and std. dev. of a model's response time. The 'S-4-2' model achieved 96% classification accuracy and a response time of 3 ms (334 predictions/s). This simple model makes accurate predictions in a short time. The 'Gu-net' model achieved a 0.64 Dice coefficient for object detection and a 76% target-presence classification accuracy with a response time of 15 ms (65 predictions/s). The brachial plexus detection task is more challenging and requires more effort to find the right solution. The results show that deep learning methods can be successfully applied to ultrasound image analysis and integrated on a single advanced research platform.
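Two of the reported metrics, the Dice coefficient and the response-time statistics, can be computed generically as in the sketch below (not the evaluation code used on the USPlatform; the toy masks and the dummy model are assumptions).

    # Generic Dice coefficient between binary masks and a simple response-time
    # measurement, for illustration only.
    import time
    import numpy as np

    def dice(pred, target, eps=1e-8):
        """Dice coefficient between two binary masks."""
        pred, target = pred.astype(bool), target.astype(bool)
        return 2.0 * np.logical_and(pred, target).sum() / (pred.sum() + target.sum() + eps)

    def response_time(model_fn, x, n_runs=100):
        """Mean and std. dev. of a model's inference time in milliseconds."""
        times = []
        for _ in range(n_runs):
            start = time.perf_counter()
            model_fn(x)
            times.append((time.perf_counter() - start) * 1e3)
        return np.mean(times), np.std(times)

    if __name__ == "__main__":
        a = np.zeros((128, 128), dtype=bool); a[32:96, 32:96] = True
        b = np.zeros((128, 128), dtype=bool); b[40:104, 40:104] = True
        print(round(dice(a, b), 3))                           # overlap of two squares
        print(response_time(lambda v: v.sum(), np.ones((512, 512))))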

Keywords:
Ultrasonic imaging, Neural networks, Convolution, Machine learning, Image segmentation, Kernel

Authors' affiliations:
Jarosik P. - other affiliation
Lewandowski M. - IPPT PAN
20p.

Conference abstracts
1.  Dłużewski P., Jarosik P., Atomistic reconstruction of dislocations based on tensor algebra of lattice distortion fields, ICMM8, 8th International Conference on Material Modelling, 2024-07-15/07-17, London (GB), pp.39-39, 2024

Abstract:
Atomistic models of dislocations in crystalline structures are often obtained by means of elastic-plastic relaxation of a perfect crystal lattice subjected to external loading. Another method is based on inserting single dislocations into the perfect lattice; in this case, the analytic formulas for the glide of a single dislocation in an elastic continuum are used. The methods mentioned above do not make it possible to generate an atomistic model of an arbitrarily chosen network of dislocations. This problem concerns many sets of dislocations observed by means of high resolution transmission electron microscopy. In this presentation we introduce a deterministic method for obtaining atomistic models of dislocations. The method is based on the use of symbolic algebra of elemental lattice distortion tensor fields. Contrary to the linear strain and rotation measures, the lattice distortion tensor is the correct measure of finite deformation. Thus, on the basis of the distortion field, many different tensor fields of finite strains and rotations can be determined uniquely. This enables the generation of atomistic models in terms of the finite deformation approach [1,2]. The method presented here links: (i) the analytic formulas for lattice distortions derived from the linear theory of dislocations, (ii) the finite deformation algebra of distortion fields, and (iii) the atom-by-atom reconstruction of dislocations including their core structures. This method has been implemented in a visual editor of dislocations. Configurations of atoms obtained in this way satisfy the stress equilibrium equations in terms of linear elasticity. On the other hand, the spatial Burgers vectors of dislocations are stretched and rotated with respect to each other according to the finite deformation theory. The resultant net of atoms can be used as input data to ab-initio and/or molecular dynamics programs to find a low-energy configuration corresponding to a given interatomic potential.

[1] Łażewski J., Jochym P.T., Piekarz P., Sternik M., Parlinski K., Cholewiński J., Dłużewski P., Krukowski S., J. Mater. Sci. 54, 10737-10745, 2019.
[2] Cholewiński J., Maździarz M., Jurczak G., Dłużewski P., Int. J. Multiscale Comp. Eng. 9, 411-421, 2014.
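As a reading aid only: the statement that finite strains and rotations follow uniquely from the distortion field corresponds to standard finite-deformation kinematics. Writing the lattice distortion tensor as \(F\) (notation chosen here, not taken from the presentation), the polar decomposition and the Green-Lagrange strain read

    \[
    F = R\,U, \qquad E = \tfrac{1}{2}\left(F^{\mathsf T}F - I\right),
    \]

where \(R\) is the finite rotation and \(U\) the right stretch tensor, whereas the linear measures \(\varepsilon = \tfrac{1}{2}(\beta + \beta^{\mathsf T})\), with \(F = I + \beta\), are only their first-order approximations.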

Authors' affiliations:
Dłużewski P. - IPPT PAN
Jarosik P. - IPPT PAN
2.  Cacko D., Jarosik P., Lewandowski M., Real-time Shear Wave Elastography Implementation on a Portable Research Ultrasound System with GPU-accelerated Processing, IEEE IUS 2023, International Ultrasonics Symposium (IUS), 2023-09-03/09-08, Montreal (CA), DOI: 10.1109/IUS51837.2023.10307608, pp.1-4, 2023

Abstract:
In this work, we present a low-cost, portable, and fully configurable ultrasound system implementing a 2-D real-time Shear Wave Elastography (SWE) imaging mode. To achieve this, we enhanced the transmit capabilities of the 256 TX/64 RX us4R-lite research system, developed by our team, to support push pulse generation. This system was combined with a signal processing pipeline reconstructing stiffness maps from raw RF data. Real-time imaging performance was provided by efficient execution of the reconstruction algorithm on a graphics processing unit (GPU). The overall system performance was assessed experimentally using an industry-standard elasticity Q/A phantom. Relevant reconstruction parameters were evaluated in terms of reconstruction time. The system achieved stiffness estimation with a bias <5% and an SNR of 30 dB, and was able to detect lesions of size >4 mm and various stiffness with a CNR in the range of 13–17 dB. A system throughput of up to 5 fps was achieved on a notebook PC equipped with an NVIDIA RTX 3060 GPU.
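For orientation, the stiffness values shown in such maps are typically Young's moduli obtained from the local shear-wave speed under the usual assumption of a linear-elastic, nearly incompressible medium; the sketch below shows only that final conversion step (generic illustration, not the us4R-lite reconstruction code).

    # Final step of a typical SWE pipeline: converting a local shear-wave speed
    # map into a Young's modulus (stiffness) map, assuming a linear-elastic,
    # nearly incompressible medium. Generic illustration only.
    import numpy as np

    def youngs_modulus_kpa(shear_speed_m_s, density_kg_m3=1000.0):
        """E = 3 * rho * c_s^2, returned in kPa."""
        return 3.0 * density_kg_m3 * np.asarray(shear_speed_m_s) ** 2 / 1e3

    if __name__ == "__main__":
        # A phantom inclusion with c_s ~ 3 m/s maps to roughly 27 kPa.
        speed_map = np.array([[1.5, 2.0], [3.0, 4.0]])
        print(youngs_modulus_kpa(speed_map))   # 6.75, 12, 27, 48 kPa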

Authors' affiliations:
Cacko D. - IPPT PAN
Jarosik P. - IPPT PAN
Lewandowski M. - IPPT PAN
3.  Dłużewski P., Jarosik P., Reconstruction of atomistic models of dislocation networks, based on lattice distortion tensor fields algebra, The 5th Polish Congress of Mechanics and the 25th International Conference on Computer Methods in Mechanics, 2023-09-03/09-07, Gliwice (PL), pp.1, 2023

Keywords:
Ab-initio, atomistic modeling, tensor fields, dislocation fields algebra, lattice distortions, visualization methods

Authors' affiliations:
Dłużewski P. - IPPT PAN
Jarosik P. - IPPT PAN
4.  Tauzowski P., Jarosik P., Żarski M., Wójcik B., Ostrowski M., Blachowski B., Jankowski Ł., Computer vision-based inspections of civil infrastructure, Modelling in Mechanics 2022, 2022-05-26/05-27, Rožnov pod Radhoštěm (CZ), pp.1-7, 2022

Abstract:
The U-Net neural network architecture has shown very promising results when applied to the semantic segmentation of biomedical images. The aim of this work is to check whether this architecture is equally applicable to semantic segmentation distinguishing the structural elements of railway viaducts. Artificial images generated by a computer graphics program rendering a 3D model of the viaduct in a photorealistic manner will be used as data sets. This approach produces a large number of images that provide a solid training set for a machine learning model.

Keywords:
Computer vision, deep learning, semantic segmentation

Authors' affiliations:
Tauzowski P. - IPPT PAN
Jarosik P. - IPPT PAN
Żarski M. - Institute of Theoretical and Applied Informatics, Polish Academy of Sciences (PL)
Wójcik B. - Institute of Theoretical and Applied Informatics, Polish Academy of Sciences (PL)
Ostrowski M. - IPPT PAN
Blachowski B. - other affiliation
Jankowski Ł. - IPPT PAN
5.  Jarosik P., Byra M., Lewandowski M., Waveflow - Towards Integration of Ultrasound Processing with Deep Learning, IUS 2018, IEEE International Ultrasonics Symposium, 2018-10-22/10-25, Kobe (JP), pp.1-3, 2018

Abstract:
The ultimate goal of this work is a real-time processing framework for ultrasound image reconstruction augmented with machine learning. To attain this, we have implemented WaveFlow – a set of ultrasound data acquisition and processing tools for TensorFlow. WaveFlow includes: ultrasound Environments (connection points between the input raw ultrasound data source and TensorFlow) and a signal processing Operators (ops) library. Raw data can be processed in real time using algorithms available both in TensorFlow and WaveFlow. Currently, WaveFlow provides ops for B-mode image reconstruction (beamforming), signal processing and quantitative ultrasound. The ops were implemented both for the CPU and GPU, as well as with built-in automated tests and benchmarks. To demonstrate WaveFlow's performance, ultrasound data were acquired from wire and cyst phantoms and processed using selected sequences of the ops. We implemented and evaluated: a Delay-and-Sum beamformer, synthetic transmit aperture imaging (STAI), plane-wave imaging (PWI), an envelope detection algorithm and dynamic range clipping. The benchmarks were executed on the Nvidia Titan X GPU integrated in the USPlatform research scanner (us4us Ltd., Poland). We achieved B-mode image reconstruction frame rates of 55 fps and 17 fps for the STAI and the PWI algorithms, respectively. The results showed the feasibility of real-time ultrasound image reconstruction using WaveFlow operators in the TensorFlow framework. WaveFlow source code can be found at github.com/waveflow-team/waveflow.

Authors' affiliations:
Jarosik P. - other affiliation
Byra M. - IPPT PAN
Lewandowski M. - IPPT PAN
