To enable high-quality image guidance during interventional surgery, a novel imaging paradigm is
being developed at UCL where video-rate, 2D imaging can be performed using an imaging probe
measuring less than 1 mm in diameter. This is achieved using a single ultrasound sensor at the end of
a stiff imaging probe, and this probe is rapidly and repeatedly moved back-and-forth to scan the
image aperture whilst continuously recording pulse-echo signals. This imaging paradigm is depicted
in the figure below.


Task 1

For the first task, download the dataset “image_data_no_recon.mat” from Moodle. Write a
script that loads this data set into the MATLAB Workspace, and plots the amplitude of the 140th
column of this set against time in µs. This column corresponds to a single pulse-echo time trace (an
“A-line”) in which you can observe two strong responses occurring at different times, corresponding
to two reflection events. The A-line signal amplitude has arbitrary units, and temporal sampling was
performed at a rate of 125 MHz.

The A-line contains both positive and negative values, which makes localising the reflection (i.e., at
which time does it occur?) somewhat ambiguous. Therefore, you should apply envelope detection
(en.wikipedia.org/wiki/Envelope_detector) to extract only the amplitude rather than the phase of
the signal. In MATLAB, this is conveniently performed by computing the absolute value of the Hilbert
transform (the hilbert function in MATLAB) of the A-line. In the same window, but in a second panel,
have your script also display the envelope-detected version of the 140th A-line. Ensure you provide
meaningful axis labels and titles.
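A minimal sketch of these steps is shown below; the variable name `data` inside the .mat file is an assumption and should be adjusted to the file's actual contents:

```matlab
% Load the raw data set; the variable name "data" is assumed here.
S = load('image_data_no_recon.mat');
data = S.data;                           % samples x A-lines

fs = 125e6;                              % temporal sampling rate, 125 MHz
aline = data(:, 140);                    % the 140th A-line
t_us = (0:numel(aline)-1).' / fs * 1e6;  % time axis in microseconds

figure;
subplot(2, 1, 1);
plot(t_us, aline);
xlabel('Time (\mus)'); ylabel('Amplitude (a.u.)');
title('Raw A-line (column 140)');

% Envelope detection: magnitude of the analytic signal.
env = abs(hilbert(aline));
subplot(2, 1, 2);
plot(t_us, env);
xlabel('Time (\mus)'); ylabel('Envelope amplitude (a.u.)');
title('Envelope-detected A-line (column 140)');
```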



NOTE: Unless explicitly stated otherwise, parameter values mentioned throughout this assignment
should be kept constant after they have been set.


Task 2 [15]

A single pulse-echo recording (A-line) is insufficient to generate an image, and therefore many such
A-lines were recorded, each at a different probe location. The data set you downloaded in Task 1
actually comprises 600 A-lines, each acquired 50 µm apart. In a new script, again load the data set,
and perform envelope detection for all 600 A-lines. In addition, to improve the visualisation, have
your script perform log-compression by computing

E_dB(t) = 20 · log₁₀(E(t)),

where E_dB(t) (in dB) is the log-compressed version of the envelope-detected A-line E(t). Your script should
normalise the log-compressed A-lines such that the maximum value observed across all 600 A-lines
is set to 0 dB. Finally, have your script display this normalised, log-compressed and envelope-
detected data set using a dynamic range of 30 dB (i.e., the colormap should range between -30 and
0 dB). Use the “hot” colormap, display the colorbar, ensure the axes are shown in millimetres and
at the correct aspect ratio, and provide meaningful labels and titles. Due to the pulse-echo nature of
the imaging paradigm, you can convert time into depth using the relation

z = (c · t) / 2,

where c = 1500 m/s is the speed of sound of the surrounding medium.
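Under the assumption that the loaded variable is again called `data`, the envelope detection, log-compression, normalisation and display steps could look like:

```matlab
S = load('image_data_no_recon.mat');   % variable name "data" assumed
data = S.data;

fs = 125e6;                            % sampling rate (Hz)
c  = 1500;                             % speed of sound (m/s)
dx = 50e-6;                            % A-line spacing (m)

env  = abs(hilbert(data));             % envelope detection, column-wise
logc = 20 * log10(env);                % log-compression
logc = logc - max(logc(:));            % normalise: global maximum at 0 dB

[nt, nx] = size(data);
z_mm = (0:nt-1) / fs * c / 2 * 1e3;    % depth axis in mm (pulse-echo: z = c*t/2)
x_mm = (0:nx-1) * dx * 1e3;            % lateral axis in mm

figure;
imagesc(x_mm, z_mm, logc, [-30 0]);    % 30 dB dynamic range
colormap(hot); colorbar;
axis image;                            % correct aspect ratio
xlabel('Lateral position (mm)'); ylabel('Depth (mm)');
title('Normalised, log-compressed envelope data (dB)');
```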



Task 3 [20]

In the previous tasks, you have been working with the “raw” data as it is recorded by an analog-to-
digital converter. However, ultrasound sensors typically exhibit a finite bandwidth, and any signal
outside this bandwidth merely corresponds to noise and should be disregarded. In addition, as
ultrasound propagates through the surroundings, it decreases in amplitude due to attenuation; as
you may have noticed in your visualisation in Task 2, reflection events occurring at greater depth
appear weaker whereas they should be of equal amplitude. In this task, you will correct for these
two issues.

First, you should apply a “4th order Butterworth” filter to suppress the frequencies below 8 MHz and
above 20 MHz. You should thus apply a band-pass filter. Hint: you can use the filtfilt MATLAB
function to achieve this, but other approaches could also be used.
Second, the acoustic attenuation can be compensated by applying a depth-dependent
weighting function. This technique, commonly referred to as “time-gain compensation” (TGC), is
applied by computing

A_TGC(t) = A(t) · z^ρ,

where the TGC-compensated A-line A_TGC(t) is simply obtained by multiplying the uncorrected A-
line A(t) with the depth z exponentiated to the TGC power ρ = 1.

Write a function that applies band-pass filtering, time-gain compensation, envelope detection,
log-compression, and normalisation to 0 dB to the data (in that order – this is important!). Think
carefully about which input(s) and output(s) are required to achieve this.

Next, write a script that loads the data set, uses your function to apply these signal processing steps,
and displays the resulting data.
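One possible shape for such a function is sketched below; the function name, argument order, and the assumption that A-lines are stored column-wise are all illustrative choices:

```matlab
function logc = processAlines(data, fs, c, tgcPower)
% Band-pass filter, TGC, envelope-detect, log-compress and normalise,
% in that order. "data" holds one A-line per column.
    [b, a] = butter(4, [8e6 20e6] / (fs/2), 'bandpass');  % 4th-order Butterworth
    filt = filtfilt(b, a, data);                          % zero-phase filtering

    nt  = size(data, 1);
    z   = (0:nt-1).' / fs * c / 2;       % depth of each sample (z = c*t/2)
    tgc = filt .* z.^tgcPower;           % time-gain compensation (implicit expansion)

    env  = abs(hilbert(tgc));            % envelope detection
    logc = 20 * log10(env);              % log-compression
    logc = logc - max(logc(:));          % normalise so the maximum is 0 dB
end
```

A calling script would then load the data set, call, e.g., `logc = processAlines(data, 125e6, 1500, 1);`, and display `logc` as in Task 2.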

Task 4 [15]

As you can see in the figure shown in Task 1, the ultrasound probe does not actually emit the
ultrasound orthogonal to the direction of motion, but rather at an angle. This means that the
visualisations generated in Tasks 2 and 3 are actually distorted; each A-line should have been
displayed at the emission angle. Therefore, for this task, write a script that loads the data set and
applies the signal processing steps of Task 3. However, instead of displaying this on a
cartesian/orthogonal grid, you will need to compute the new transform coordinates x_trans and
z_trans for every time sample in every A-line using the following equations:

x_trans = cos(θ) · z + x₀ and
z_trans = sin(θ) · z,

where z = (c · t)/2 is the distance along the emitted beam, x₀ is the imaging probe position
corresponding to the A-line, and θ = 45° is the ultrasound emission angle.

Write a script that repeats all steps of Task 3, but now also displays the transformed data corrected
for non-orthogonal emission, in the same window but a separate panel. The output of your script
should resemble the figure shown below.
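A sketch of the coordinate transform and its display is given below; it builds on a processed image `logc` as produced in Task 3, and `surf` with a top-down view is just one of several ways to show data on a non-rectilinear grid:

```matlab
theta = deg2rad(45);                   % ultrasound emission angle
fs = 125e6; c = 1500; dx = 50e-6;
[nt, nx] = size(logc);

z  = (0:nt-1).' / fs * c / 2;          % distance along the beam, per sample
x0 = (0:nx-1) * dx;                    % probe position per A-line

Xt = cos(theta) * z + x0;              % nt x nx via implicit expansion
Zt = sin(theta) * z + zeros(1, nx);

figure;
surf(Xt*1e3, Zt*1e3, logc, 'EdgeColor', 'none');  % colour by log-compressed data
view(2); axis ij image;                % top-down view, depth increasing downward
caxis([-30 0]); colormap(hot); colorbar;
xlabel('x (mm)'); ylabel('z (mm)');
title('Transformed data (45° emission)');
```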


Task 5 [15]

To further improve the image quality, the raw data you have previously considered has been
reconstructed into a much higher-quality image (using a “delay-and-sum” algorithm). Download this
reconstructed data set “image_data_DnS_recon.mat” from Moodle, and write a new script
that performs the same signal processing steps as in Tasks 3-4, but in this case use a band-pass filter
with cut-off frequencies of 0.1 and 20 MHz instead.

In addition, have your script extract the image pixel with the highest amplitude (which corresponds
to the image of one of the point targets), and extract a subset (region of interest; ROI) from the full
data set that is centered around this pixel of maximum amplitude and measures 2.5 mm in both
width and height. Add annotations to indicate the location and size of this ROI in the previously
generated visualisation, and – in the same window but in a separate panel – display the data
contained within this ROI. Maximise the visibility of this ROI by programmatically adjusting the sizes
of the two panels and/or the window. An example that contains annotations and maximises visibility
of the ROI is shown below for inspiration.
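The ROI extraction might be sketched as follows; note that the pixel spacings `dz` and `dx` of the reconstructed data set are assumptions here (the values below are the raw-data spacings from Tasks 1–2), and `logc` denotes the processed image:

```matlab
dz = 1500 / (2*125e6);                   % axial sample spacing (m) - assumed
dx = 50e-6;                              % lateral pixel spacing (m) - assumed

[~, idx] = max(logc(:));                 % pixel of maximum amplitude
[iz, ix] = ind2sub(size(logc), idx);

hh = round(1.25e-3 / dz);                % half-height of a 2.5 mm ROI, in samples
hw = round(1.25e-3 / dx);                % half-width, in pixels
roi = logc(iz-hh:iz+hh, ix-hw:ix+hw);    % assumes the ROI fits inside the image

% Annotate the ROI on the full image (axes assumed to be in mm):
rectangle('Position', ...
    [(ix-hw-1)*dx, (iz-hh-1)*dz, 2*hw*dx, 2*hh*dz] * 1e3, 'EdgeColor', 'w');
```

The ROI itself can then be displayed in a second panel, with the panel sizes adjusted programmatically (e.g. via the axes `Position` properties or the figure `Position`).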




Task 6 [15]

Now that you have extracted an ROI around one of the imaging point targets, we can use this to
extract the spatial resolution of the imaging system. This is here defined as the “full width at half
maximum” (FWHM) of the signal along both the horizontal (lateral) and vertical (axial) directions,
meaning that the resolution is given by the width or height of the bright area as delineated by the -6
dB boundary. This resolution can be extracted by performing the following steps:

1. Find the location of the pixel of highest amplitude within the ROI,
2. Extract the corresponding horizontal (lateral) and vertical (axial) profiles through this
maximum pixel,
3. Work out how many data samples of these two profiles are ≥-6 dB, and
4. Convert this sample count into real-world distances.

First, write a function that extracts the lateral and axial resolutions, as well as the lateral and axial
profiles through the pixel of maximum amplitude, following the steps outlined above.

Next, in a new script apply this function to the ROI extracted in Task 5, and have this script generate
plots (in the same window but separate panels) of both the lateral and axial profiles. In addition, use
annotation to indicate the -6 dB threshold level used to extract FWHM resolution values. Finally,
display the two resolution values obtained in the plots – for instance in the titles of the panels, or
using text annotation.
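Following the four steps above, the resolution-extraction function could take a form such as this (function and variable names are illustrative, and the ROI is assumed to be in dB with its peak at 0 dB):

```matlab
function [latRes, axRes, latProf, axProf] = fwhmResolution(roi, dx, dz)
% FWHM (-6 dB) lateral and axial resolutions from a log-compressed ROI.
% dx, dz: lateral and axial pixel spacings in metres.
    [~, idx] = max(roi(:));              % step 1: pixel of highest amplitude
    [iz, ix] = ind2sub(size(roi), idx);

    latProf = roi(iz, :);                % step 2: lateral profile through the peak
    axProf  = roi(:, ix);                %         axial profile through the peak

    latRes = sum(latProf >= -6) * dx;    % steps 3-4: count samples >= -6 dB
    axRes  = sum(axProf  >= -6) * dz;    %            and convert to distance
end
```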


Task 7 [10] Note – only to be completed if you take this module at level 7!

As the imaging probe is mechanically actuated forward and backward, flex in the probe and non-
ideal motion of the actuator introduce slack in the imaging system, which causes interframe
“jitter”: the point targets appear to move between consecutive frames when the scan direction
changes from right-to-left to left-to-right. To visualise this, download the data set
“video_data_DnS_recon.mat” from Moodle, which contains reconstructed data for five
consecutive frames, and write a script that creates a video that displays and stores these image
frames. This video should use MPEG-4 encoding, display the data at a frame rate of 5 Hz, and use a
“Quality” factor of 100 to minimise compression artefacts.
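The video-writing part might be sketched as follows, assuming the .mat file contains a 3-D array `frames` of size samples × lines × 5 (the variable name is an assumption):

```matlab
v = VideoWriter('jitter_video.mp4', 'MPEG-4');  % output file name is illustrative
v.FrameRate = 5;                                % display at 5 Hz
v.Quality   = 100;                              % minimise compression artefacts
open(v);
fig = figure;
for k = 1:size(frames, 3)
    imagesc(frames(:, :, k), [-30 0]);
    colormap(hot); axis image off;
    writeVideo(v, getframe(fig));               % store the rendered frame
end
close(v);
```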

After careful investigation, it turns out that the horizontal (lateral) interframe jitter amounts to 0.9
mm, meaning that when switching from left-to-right to right-to-left motion, the actuator traverses a
distance of 0.9 mm before the imaging probe itself starts to move.

Next, modify your script to compensate for this slack by horizontally offsetting every second frame
by the jitter distance (or every frame by half that distance in opposite directions), to ensure that
images acquired in both directions of motion are properly aligned. Your script should generate a
single video displaying both the original reconstructed frames and the jitter-corrected frames side-
by-side.
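The correction itself could be sketched as below, assuming `frames` from the previous step and a known lateral pixel pitch `dx` (the 50 µm value is carried over from Task 2 as an illustrative assumption; `circshift` wraps pixels around the edge, so padding or cropping may be preferable):

```matlab
jitter = 0.9e-3;                         % lateral interframe jitter (m)
dx = 50e-6;                              % lateral pixel pitch (m) - assumed
px = round(jitter / dx);                 % jitter expressed in pixels

corrected = frames;
% Offset every second frame by the jitter distance along the lateral axis.
corrected(:, :, 2:2:end) = circshift(frames(:, :, 2:2:end), px, 2);

% Original and corrected frames side-by-side, ready for the video loop.
sideBySide = cat(2, frames, corrected);
```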
