
Submitted by Alexander Ploier
Submitted at the Industrial Mathematics Institute
Supervisor: a.Univ.Prof. DI Dr. Andreas Neubauer
June 2019
JOHANNES KEPLER UNIVERSITY LINZ
Altenbergerstraße 69, 4040 Linz, Österreich
www.jku.at
DVR 0093696

A curvature study of the medial calcar region of the proximal humerus

Master Thesis

to obtain the academic degree of

Diplom-Ingenieur

in the Master’s Program

Industriemathematik


Statutory Declaration (Eidesstattliche Erklärung)

I declare under oath that I have written this master's thesis independently and without outside help, that I have not used any sources or aids other than those indicated, and that I have marked all passages taken literally or in substance from other sources as such. This master's thesis is identical to the electronically submitted text document.


Abstract

This thesis deals with the study of the humerus, more precisely the area between its head and the surgical neck. When bones are broken, doctors have to put screws through them to keep the individual parts together during the healing process. In some cases this procedure leads to a problem: especially for elderly patients, whose bones are often softer than those of younger individuals, it often happens that the screws begin to pull through the bone, which leads to more damage.

It was our task to investigate the curvature of the bones in the region of interest. If it were almost equal for all patients, then one could build one or more support templates. These could be attached to the lower side of the bone as reinforcement, where the screws find a better purchase. To yield a good result, the template should fit the form of the bone quite well.

We focused mostly on the aspect of mathematical imaging and on how to prepare the image in such a way that it was easy to work with. Afterwards we determined a spline function that approximates the form of the bone in the area of interest as well as possible.

The investigations for several different persons showed that the form of the humerus is indeed always very similar; it depends neither on size nor on gender or age.


Zusammenfassung (Summary)

This thesis presents a mathematical study of the upper arm and deals specifically with the region between the humeral head and neck. In the case of a fracture, screws are inserted in this region to hold the individual parts together and to support the healing process. In some cases, however, this method leads to a problem: especially in elderly patients, whose bones are softer compared to those of younger ones, it frequently happens that the screws bore through the bones because of their condition and thereby damage them even further.

Our task was to investigate the curvature of this part of the bone. If it were roughly the same for all patients, one could manufacture one or more support splints. These could be attached to the lower side of the bone as reinforcement, in which the screws would then find a better hold. To give the bone sufficient support, such a splint must follow the course of the bone as closely as possible.

We placed a strong focus on the aspect of mathematical image processing and on the question of how the image must be processed so that one can work with it. We then computed a spline function that approximates the form of the bone in the desired region as well as possible.

The investigations on many different persons showed that the form of the upper arm is indeed always very similar; it depends neither on height nor on gender or age.


Acknowledgments

I would like to thank my supervisor Prof. Andreas Neubauer for the opportunity to write this thesis and for his always open door, which allowed us to discuss every part of this work in great detail.

I am especially grateful to Dr. Angelika Schwarz from the AUVA-Unfallkrankenhaus in Graz for her collaboration on the medical part of this work. Without her this thesis would not have been possible.

I also want to thank the whole Institute of Industrial Mathematics, especially the people in the lunch group and those who always had time to go for a run when I was in dire need of leaving the office.

Last but not least I want to express my deep gratitude to my family and all the people outside the office who helped or supported me during my studies and who helped me stay grounded, especially in the last couple of weeks of finishing this thesis.


Contents

1 Introduction
2 Medical Background
   2.1 DICOM
   2.2 Area of interest
   2.3 Patients
3 Image Processing
   3.1 Edges
   3.2 Filters
      3.2.1 Linear filters
      3.2.2 Non-linear filters
   3.3 Segmentation
      3.3.1 Thresholding
      3.3.2 Edge-based
      3.3.3 Region-based
   3.4 Mathematical Morphology
   3.5 Edge Detector Performance
4 Application
   4.1 Preparing the image
   4.2 Spline interpolation
5 Results
   5.1 Spline comparison
   5.2 Curvature comparison
   5.3 Interpretation
Bibliography
List of Figures


Chapter 1

Introduction

The problem we are dealing with in this thesis arose from our medical partners at the AUVA-Unfallkrankenhaus Steiermark in Graz. They are interested in getting a broken humerus fixed. For the stabilization, screws through the surgical neck of the bone are used. To increase the stability of the screws and to prevent them from damaging the bone further – which is necessary for the healing process – a fixation plate on the lower side of the bone is needed. This plate should be close to the form of the bone.

Therefore, our task was to make a study of this region of such bones of healthy patients from CT scans, to find out whether the shape is similar for different patients or varies too much. We had to find out what the shape of this part of the bone depends on – the size of the patient, gender, or other factors. If this study showed that there are only small differences in shape, one could prepare some general plates in advance that can be used during a surgery without the need to adjust the plate during the operation.

We did all the programming in MATLAB, since in this language one can easily read the DICOM files, which are the outputs of the scans and are described in greater detail in Section 2.1. Since one scan consists of around 70 pictures, we had to decide which one was best suited for us. This decision was made by hand for every patient, because we had different levels of thickness of the humerus.

The general procedure for dealing with these pictures was then as follows (a minimal MATLAB sketch of the pipeline is given after the list):

• Read the image.

• Create a black and white image.

• Use a thresholding algorithm and prepare the image for further processing.

• Choose the area of interest.
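The following sketch illustrates these four steps in MATLAB. It is only an illustration: the file name, the Hounsfield interval, and the cropping rectangle are placeholder values, and the actual thresholding and cleanup used in this work are described in Chapter 4.

% Minimal sketch of the processing pipeline (placeholder values only).
Y = double(dicomread('patient_scan.dcm'));  % read one slice of the CT scan
                                            % (a rescale to Hounsfield units
                                            % may be needed, depending on the
                                            % DICOM metadata)
Bone = [300, 1300];                         % assumed HU interval for bone
S = ~(Y > Bone(1) & Y < Bone(2));           % black and white image: 0 = bone
roi = S(120:260, 80:220);                   % area of interest, chosen by hand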

This process does not need to be automated, since we want to make a generalization about the humerus and only need to do this once. Therefore, we decided it would be best to choose the area of interest manually. We interpolated a spline in this area and from the resulting B-spline we calculated the curvature at every point. This was needed so we could compare the forms of the splines to each other without having to specify the size or angle of the picture taken. After this was done, we gave our partners different deliverables so they could decide what they wanted to use to move the process of creating these plates along.

The mathematical procedures for manipulating the image are explained in Chapter 3.


Chapter 2

Medical Background

2.1 DICOM

All the information in this section is solely taken from the book [7].

DICOM stands for Digital Imaging and COmmunications in Medicine. It is the standard and backbone of modern medical image display, and the most universal and fundamental standard in digital medical imaging. As such, it provides all the necessary features for the diagnostically accurate representation and processing of medical imaging data – and more. Since its conception decades ago, it has played an integral part in the evolution of digital medicine by providing the following advantages to doctors around the world:

• A universal standard of digital medicine. All current digital-image-acquisition devices produce DICOM images and communicate through DICOM networks.

• Very high image quality – it supports up to 65536 shades of grey.

• Full support for numerous image-acquisition parameters and different data types. Not only does DICOM store the images, it also records a multitude of other image-related parameters, such as the patient's 3D position, physical sizes of objects in the image, slice thickness, image exposure parameters, and so on.

• Complete encoding of medical data.

• Clarity in describing digital imaging devices and their functionality – the backbone of any medical imaging project.


The most important information in these DICOM files was, for us, the so-called Hounsfield numbers. They are described on the Radiopaedia website [8] in the following way:

The standard unit used in computed tomography to express CT numbers in a standardized and convenient form is a dimensionless one called the Hounsfield unit (HU). It comes from a linear transformation of a coefficient that measures how easily an X-ray can pass through a given material. This transformation is defined in the following way:

$$\mathrm{HU} = \frac{\mu_x - \mu_{\mathrm{water}}}{\mu_{\mathrm{water}} - \mu_{\mathrm{air}}} \times 1000\,.$$

The radio-densities of air and water were arbitrarily assigned when the unit was first introduced: 0 HU for distilled water at zero degrees Celsius, and −1000 HU for air at a pressure of $10^5$ Pascal. Therefore, we have a range of units from −1000 to +2000.
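As a small illustration of this transformation, the following MATLAB lines convert linear attenuation coefficients into Hounsfield units; the numerical values are made-up placeholders, not measured data.

% Illustrative conversion of attenuation coefficients to Hounsfield units.
mu_water = 0.19;   % attenuation coefficient of water (assumed value, 1/cm)
mu_air   = 0.0;    % attenuation of air is approximately zero
mu_x     = 0.38;   % attenuation of the probed material (assumed value)
HU = (mu_x - mu_water) / (mu_water - mu_air) * 1000;
% mu_x = mu_water yields 0 HU and mu_x = mu_air yields -1000 HU, as required.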

2.2 Area of interest

The following part is a contribution from our project partners at the AUVA in Graz:

“The shoulder joint is the most flexible joint of the human body and essential for the high range of action of the upper limb. In healthy situations, this high range of motion requires muscular stabilization, which is ensured by the rotator cuff. In fracture situations, a stable fixation from postoperative day one is an important factor for a good outcome. The up-to-date trauma regime aims at functional postoperative treatment, which requires a stable osteosynthesis.

Complex fractures of the proximal humerus, especially in elderly female patients, represent the third most common fracture in humans, with increasing rates. According to registry data, a threefold increase in the incidence since 1970 has been reported. Because of demographic changes and the continuing population aging, additional increases and socio-economic burdens must be expected. The treatment of this fracture therefore represents not only a medical challenge, with the patient's comorbidities and osteoporosis, but also an important socio-economic factor (see Figure 2.1).

The spectrum of treatment options includes three main pillars:

• conservative care,

• osteosynthesis techniques with the goal of joint preservation,

• and joint replacement techniques.


(a) 66-year-old patient, an unstable and complex fracture situation in a 3D CT scan

(b) 75-year-old patient, X-ray of the most common osteosynthesis technique

(c) 82-year-old patient, X-ray of a primary joint replacement, reverse shoulder arthroplasty

Figure 2.1: Three scans of different patients

In the last decade, proximal humeral fractures have been more and more frequently treated with reverse total shoulder arthroplasty as a joint-replacing technique, especially in elderly patients. This therapy option comprises primary as well as secondary fracture treatment as a salvage procedure; good clinical results are reported. One benefit of joint replacement is the elimination of the risk of avascular humeral head necrosis compared to osteosynthesis. Further, the outcome becomes more predictable, but this method offers few fallback options and should be well considered for each patient. However, the optimal treatment of complex fractures of the proximal humerus is still debated in the literature. Several options are reported, and no single technique has proven its superiority: plate fixation, minimally invasive technique, suture fixation technique, intramedullary nailing, hemi-arthroplasty and reversed shoulder arthroplasty.

Due to the high complication and revision rates after osteosynthesis, an increasing interest in conservative therapy can meanwhile be observed, and good results are described. A range in functional outcome and a bias regarding the patient collective have to be reported, because stable fractures can mainly be treated successfully conservatively; in displaced and unstable fracture types, surgery should be the first option. Due to rising patient demand, it is assumed that the number of operative therapies will increase in the next decades. In the literature, benefits of an improved implant-bone interface are well described. These data were evaluated for the most common fracture types in elderly people, like proximal femoral and humeral fractures, and represent a representative patient collective. In the proximal humeral region several options, like cement augmentation or fibular strut allograft as well as other implant innovations or changes, showed that new strategies could improve the biomechanical results and stability (see Figure 2.2).

(a) CT scan of a 65-year-old patient, allograft in an unstable fracture situation to improve stability in osteosynthesis

(b) 80-year-old patient, CT scan of a cement augmentation in an angular stable osteosynthesis

Figure 2.2: Two scans of different patients

Population aging and the loss of bone mineral density have a direct correlation with the incidence and severity of fractures; it is obvious that new designs have to be developed in the following years, and better designs should be the goal.

Because of all the described facts, new strategies should be aimed for. Therefore, anatomical structures must be interpreted to create new evidence and options in this frequent and important field of surgery. Exact knowledge of anatomical characteristics may also provide the possibility of patient-specific implants, which could secure optimal stability and postoperative treatment.

Thus, it is well described that one of the key fragments for a stable osteosynthesis is the medial calcar region of the proximal humerus. Consequently, the aim of this study is to determine specific anatomical characteristics of this region as an opportunity for innovations. The hypothesis was defined that this region has a similar aspect in different patients and that the results can be shown in patients' CT scans.”

2.3 Patients

The data we got from our partners at the AUVA-Unfallkrankenhaus consist of 30 patients, each with a different diameter of the head of the bone. They have been grouped into two groups depending on the diameter d: one with d ≥ 47 mm and one with d < 47 mm. The doctors we worked with made that decision. We had 15 men and 15 women in our group – with men typically having larger diameters than women, but we also had two women in the group with the larger upper arm bones. We also had a healthy mix of pictures from the right and left side of the body – 16 from the right side and 14 from the left. The age range of our patients was between 24 and 78 years, with the male and female groups individually having a similar range. The last important point for us was the actual size of the scan; it was the reason for a lot of discussion between the project partners. The images ranged from 107 mm × 90 mm to 247 mm × 226 mm.


Chapter 3

Image Processing

This chapter is more or less solely based on the book [4]; anything from another source will be cited additionally.

Before we can do any work on an image, we first need to define what it is in a mathematical sense.

Figure 3.1: Representation of a digital image


As one can see in Figure 3.1, every pixel of the image corresponds to an entry of the n × n matrix at the same location, meaning pixel (i, j) is the entry f_{i,j}, beginning with (1, 1) in the upper left corner and ending in the lower right corner.

3.1 Edges

The first step in the edge detection process is to define what an edge pixel and an edge actually are. Edges are significant changes in the values of the pixels of an image and are important tools for analyzing it. They typically occur on the boundary between two different regions in an image. The process of detecting them is frequently an early step in recovering information from the image. Due to its importance, edge detection continues to be an active research area.

An edge is a noteworthy change of the image intensity in a small area; it is usually associated with a discontinuity in either the intensity itself or, as already stated, its first derivative. It can be one of two things: either a step discontinuity, where the intensity abruptly changes from one value to another, or a line discontinuity, where the image intensity changes abruptly but then returns to the starting value within some short distance. These types of edges, step and line edges, are rare in real images because of the smoothing introduced by most devices; sharp discontinuities are sparse in real images. Therefore, we have to introduce two further types of edges: the ramp and the roof edge, the first derived from the step edge, the latter from the line edge. Here the changes do not occur at a single point but rather over a finite distance. One can see such types in Figure 3.2.

It is also possible for an edge to have more than one characteristic, for example a combination of step and line. The boundary of an object, for example, is a significant feature in the image and usually produces step edges, because the image intensity of the object differs from that of the background. A perfect edge detection algorithm would not recognize noise as an edge: even though the changes in intensity may be local, they are not significant. But in real-world applications we do not have an operator that is immune to noise, i.e., to random values added to or subtracted from the intensity of the pixels. The term edge is usually used for edge points or edge fragments. The former are defined as points in an image with coordinates (i, j) at the location of a significant local change in intensity. The latter correspond to the i and j coordinates of an edge together with its orientation θ, which may be the gradient angle. The set produced by an edge detector can be split into two groups: correct edges, which correspond to edges in the scene, and false edges, which do not have this property. One might define a third group of edges that are in the image but have not been detected. This and additional information can be found in the book [3].

Figure 3.2: Edge profiles

3.2 Filters

Have you ever wondered why most images do not seem to be that clear after taking them? The answer is that most of the time there is noise that adds to the data around us. When thinking about filters, one might think about them in the usual sense – like the filter in old coffee machines, which after brewing leaves behind the bitter coffee grounds so one can enjoy what it has let through. But these are not the only filters there are; there are also filters that emphasize specific parts of an image, or blur it so we do not have to deal with rough jumps of colour in the picture, like the linear and non-linear filters which will now be discussed in further detail.

3.2.1 Linear filters

The first and simplest idea for a filter is the so-called box filter. It takes the average of the pixel values in a square around the targeted pixel and then replaces that pixel's value with it. More or less all linear filters work in that way, except that many of them do not use the plain average but a weighted average. If we take g as the output of our filter and w as the weights, we get the following equation:

$$g_{i,j} = \sum_{k=-m}^{m} \sum_{l=-m}^{m} w_{kl}\, f_{i+k,j+l}$$

for i, j = m+1, ..., n−m, where f is the pixel value as seen in Figure 3.1. One downside is that pixels at the border need to be dealt with separately, since we do not have a full square around them. But we will not go into further detail here, since it is not relevant for our calculations.
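As an illustration, a box filter of exactly this form can be written in a few lines of MATLAB; this is a minimal sketch assuming a greyscale image stored as a double matrix f, not the exact code used in this thesis.

% Box filter: every weight in the (2m+1)x(2m+1) window equals 1/(2m+1)^2.
m = 1;                              % half-width of the window
w = ones(2*m+1) / (2*m+1)^2;        % uniform weight matrix
g = conv2(f, w, 'same');            % weighted average around every pixel
% 'same' keeps the image size; borders are implicitly zero-padded, which is
% one simple way of handling the border problem mentioned above.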

Another big disadvantage is that those filtering techniques have serious limitations in dealing with signals that have been created or processed by a system exhibiting some degree of non-linearity. In image processing many such characteristics are often present, which is why it is no surprise that this is the field where non-linear filtering techniques have shown their superiority. Additional information about this can be found in the paper [6].

Smoothing

Gaussian filters overcome the drawback of the box filter – namely that its weights have an abrupt cut-off – since here the weights decay to zero. They are specified by the probability density function

$$w_{kl} = \frac{1}{2\pi\sigma^2}\, e^{-\frac{k^2+l^2}{2\sigma^2}}$$

for k, l = −[3σ], ..., [3σ] and some positive σ, where [·] denotes the floor function. In Figure 3.3 one can see the Gaussian filter applied to a CT scan of a shoulder with different σ values to showcase the smoothing.

(a) σ = 0 (b) σ = 0.5 (c) σ = 2

Figure 3.3: Pictures of a Gaussian filter applied to an image with different σ
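A direct transcription of this definition into MATLAB might look as follows; a sketch built by hand, without relying on toolbox routines.

% Build a Gaussian kernel with half-width [3*sigma] and apply it.
sigma = 2;
r = floor(3*sigma);                        % the floor function [3*sigma]
[k, l] = meshgrid(-r:r, -r:r);             % all offsets in the window
w = exp(-(k.^2 + l.^2)/(2*sigma^2)) / (2*pi*sigma^2);
w = w / sum(w(:));                         % renormalize the truncated kernel
g = conv2(f, w, 'same');                   % smoothed image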

Edge-detection

Several filters can be used to emphasize edges in images. One way to approach this is via first-derivative filters. The weights of a first-derivative row filter are defined in the following way:

$$w = \frac{1}{6} \begin{pmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{pmatrix}$$

If one looks at the Taylor expansion of f close to the point (i, j), one can show that these weights give an estimate of the first derivative. Since

$$f_{i+k,j+l} \approx f_{i,j} + k\,\frac{\partial f_{i,j}}{\partial y} + l\,\frac{\partial f_{i,j}}{\partial x} + \frac{k^2}{2}\,\frac{\partial^2 f_{i,j}}{\partial y^2} + \frac{l^2}{2}\,\frac{\partial^2 f_{i,j}}{\partial x^2} + kl\,\frac{\partial^2 f_{i,j}}{\partial x\,\partial y}\,,$$

we may rewrite $\sum_{k=-1}^{1}\sum_{l=-1}^{1} w_{kl}\, f_{i+k,j+l}$ in an approximate way as:

$$\begin{aligned}
&-\frac{1}{6}\left(f_{i,j} - \frac{\partial f_{i,j}}{\partial y} - \frac{\partial f_{i,j}}{\partial x} + \frac{1}{2}\frac{\partial^2 f_{i,j}}{\partial y^2} + \frac{1}{2}\frac{\partial^2 f_{i,j}}{\partial x^2} + \frac{\partial^2 f_{i,j}}{\partial x\,\partial y}\right) \\
&+\frac{1}{6}\left(f_{i,j} - \frac{\partial f_{i,j}}{\partial y} + \frac{\partial f_{i,j}}{\partial x} + \frac{1}{2}\frac{\partial^2 f_{i,j}}{\partial y^2} + \frac{1}{2}\frac{\partial^2 f_{i,j}}{\partial x^2} - \frac{\partial^2 f_{i,j}}{\partial x\,\partial y}\right) \\
&-\frac{1}{6}\left(f_{i,j} + 0 - \frac{\partial f_{i,j}}{\partial x} + 0 + \frac{1}{2}\frac{\partial^2 f_{i,j}}{\partial x^2} + 0\right) \\
&+\frac{1}{6}\left(f_{i,j} + 0 + \frac{\partial f_{i,j}}{\partial x} + 0 + \frac{1}{2}\frac{\partial^2 f_{i,j}}{\partial x^2} + 0\right) \\
&-\frac{1}{6}\left(f_{i,j} + \frac{\partial f_{i,j}}{\partial y} - \frac{\partial f_{i,j}}{\partial x} + \frac{1}{2}\frac{\partial^2 f_{i,j}}{\partial y^2} + \frac{1}{2}\frac{\partial^2 f_{i,j}}{\partial x^2} - \frac{\partial^2 f_{i,j}}{\partial x\,\partial y}\right) \\
&+\frac{1}{6}\left(f_{i,j} + \frac{\partial f_{i,j}}{\partial y} + \frac{\partial f_{i,j}}{\partial x} + \frac{1}{2}\frac{\partial^2 f_{i,j}}{\partial y^2} + \frac{1}{2}\frac{\partial^2 f_{i,j}}{\partial x^2} + \frac{\partial^2 f_{i,j}}{\partial x\,\partial y}\right)
\end{aligned}$$

Now most of it cancels out and we are left with

$$\sum_{k=-1}^{1} \sum_{l=-1}^{1} w_{kl}\, f_{i+k,j+l} \approx \frac{\partial f_{ij}}{\partial x}\,.$$

The first-derivative column filter looks like this:

$$w = \frac{1}{6} \begin{pmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix}$$

These are good filters to show edges, but they also have the following disadvantages:

• This method acts on the noise of an image as well. A way around this is to apply these types of filters to an image which has already been through a smoothing filter, like the Gaussian we discussed above.

• Each of the two first-derivative filters responds only to edges in a specific direction. To get rid of this there is the second-derivative filter with weights

$$w = \frac{1}{3} \begin{pmatrix} 1 & 1 & 1 \\ 1 & -8 & 1 \\ 1 & 1 & 1 \end{pmatrix}.$$

One can show the following result:

$$\sum_{k=-1}^{1} \sum_{l=-1}^{1} w_{kl}\, f_{i+k,j+l} \approx \frac{\partial^2 f_{ij}}{\partial x^2} + \frac{\partial^2 f_{ij}}{\partial y^2}\,,$$

so this approximates the Laplacian of f. Unfortunately, we run into the same problem as with the previous first-derivative filters – not only are the edges emphasized, the noise is as well. To reduce this phenomenon we can use the simpler version of the weight matrix

$$w = \begin{pmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{pmatrix}$$

after applying the Gaussian filter, thus resulting in the so-called Laplacian-of-Gaussian filter.
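Putting the two steps together, a Laplacian-of-Gaussian filter can be sketched in MATLAB as follows, reusing the Gaussian kernel w from the earlier sketch:

% Laplacian-of-Gaussian: smooth first, then apply the 3x3 Laplacian weights.
lap = [0 1 0; 1 -4 1; 0 1 0];             % simple Laplacian weight matrix
g_smooth = conv2(f, w, 'same');           % w: Gaussian kernel from above
g_log    = conv2(g_smooth, lap, 'same');  % edges show up as zero-crossings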


3.2.2 Non-linear filters

Since linear filters emphasize the edges and the noise in the image, in this section we look at filters which reduce noise while preserving the edges. This result again comes with a price: there are tons of filters to choose from, which can be computationally expensive, create new features, and distort parts of the image. The moving median filter is the one we take a look at next, because it is the most widely used one.

Histogram-based filters

The moving median filter is similar to the moving average filter we discussed earlier; the difference is that instead of the mean around a pixel we use the median of the local histogram of the image:

$$g_{i,j} = \operatorname{med}\,\{f_{i+k,j+l} : k, l = -m, \dots, m\}$$

for i, j = m+1, ..., n−m. The median is the value which splits a set of numbers in half. The histogram consists of several bins into which the pixel values of the image are put according to their frequency of appearance. One thing of importance is that non-linear filters are not additive, meaning that an application of such a filter with a larger window is not the same as a repeated application with a smaller window size.
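A compact way to express this filter in MATLAB is shown below; medfilt2 from the Image Processing Toolbox implements exactly this moving median, and the window size is an illustrative assumption.

% Moving median filter over a (2m+1)x(2m+1) window, here m = 1.
m = 1;
g = medfilt2(f, [2*m+1, 2*m+1]);  % replaces each pixel by the window median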

Spatially-adaptive filters

Median filters can be improved using not only local histograms but also the spatial distribution of pixel values. Here one just makes a distinction between the center pixel and the rest of the window. One of those is the k-neighbours filter, where in general the k nearest pixels to the center point are averaged. Another option is the minimum variance filter. In this case, the mean ($\bar f$) and the variance (S) are evaluated in subdivisions within a window, and the output is defined to be the mean of the subwindow which has the smallest variance.

Edge-detection filters

The range filter, which is a simple version of the filters from this class, produces as output the difference between the maximum and minimum values in the window around the pixel. Roberts' filter is defined in the following way:

$$g_{ij} = |f_{ij} - f_{i+1,j+1}| + |f_{i+1,j} - f_{i,j+1}|\,.$$

In other words, the output is defined as the sum of the absolute values of the differences of diagonally opposite pixels.

Gradient filters

At a point (i, j) the maximum gradient is given by

$$\sqrt{\left(\frac{\partial f_{ij}}{\partial x}\right)^2 + \left(\frac{\partial f_{ij}}{\partial y}\right)^2}\,.$$

A filter with an estimate instead of the maximum gradient is Prewitt's filter. Its estimates are

$$\widehat{\frac{\partial f_{ij}}{\partial x}} = \frac{1}{6}\,(f_{i-1,j+1} + f_{i,j+1} + f_{i+1,j+1} - f_{i-1,j-1} - f_{i,j-1} - f_{i+1,j-1})$$

$$\widehat{\frac{\partial f_{ij}}{\partial y}} = \frac{1}{6}\,(f_{i+1,j-1} + f_{i+1,j} + f_{i+1,j+1} - f_{i-1,j-1} - f_{i-1,j} - f_{i-1,j+1})$$

Very similar to that is Sobel's filter, which defines the estimates as follows:

$$\widehat{\frac{\partial f_{ij}}{\partial x}} = \frac{1}{8}\,(f_{i-1,j+1} + 2f_{i,j+1} + f_{i+1,j+1} - f_{i-1,j-1} - 2f_{i,j-1} - f_{i+1,j-1})$$

$$\widehat{\frac{\partial f_{ij}}{\partial y}} = \frac{1}{8}\,(f_{i+1,j-1} + 2f_{i+1,j} + f_{i+1,j+1} - f_{i-1,j-1} - 2f_{i-1,j} - f_{i-1,j+1})$$

One can clearly see that the latter gives more weight to the pixels closest to (i, j).
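As a sketch, Sobel's estimates translate into MATLAB as written-out kernels; the gradient magnitude then combines both directions.

% Sobel's filter as convolution kernels (1/8 normalization as above).
sx = [-1 0 1; -2 0 2; -1 0 1] / 8;   % estimate of df/dx
sy = [-1 -2 -1; 0 0 0; 1 2 1] / 8;   % estimate of df/dy
gx = conv2(f, sx, 'same');           % conv2 flips the kernels, which only
gy = conv2(f, sy, 'same');           % changes the sign of gx and gy ...
gmag = sqrt(gx.^2 + gy.^2);          % ... and the magnitude is unaffected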

One can see the application of the previously discussed filters in Figure 3.4.

(a) Original (b) Roberts' filter (c) Prewitt's filter (d) Sobel's filter

Figure 3.4: Pictures of different filters applied to the original image

The last filter we want to discuss is the one we actually used, because it is harder to fool with noise: Canny's filter. In 1986, John Canny proposed – in response to the problem that edges broaden in extent as smoothing increases – that the edges could be determined through zero-crossings of second derivatives in the direction of the steepest gradient. It is implemented in the following way. It finds edges by looking for local maxima of the gradient of the input image, where the gradient is calculated using the derivative of a Gaussian filter. The method uses two thresholds to detect strong and weak edges, and includes weak edges in the output only if they are connected to strong edges. This double threshold is determined by a high and a low threshold value. Pixels are marked as strong edge pixels if their gradient value is higher than the high threshold; if the value lies between the high and the low threshold, the pixel is marked as a weak edge pixel. How are these values determined, one might ask? They are determined empirically and depend on the content of the given input. The last step is to check whether the weak edges are connected to strong edges or not. If they are, we keep them in the picture; if not, we throw them out. The reasoning is that weak edges which are not connected are most likely created by noise – in our case – or by colour variation in the image. By using two thresholds, the Canny method is therefore less likely than the other methods to be fooled by noise, and more likely to detect true weak edges (see Figure 3.5). The original work on this filter is the paper [2].
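In MATLAB, Canny's method is available through the edge function; the threshold pair below is a placeholder, since suitable values depend on the content of the input, as discussed above.

% Canny edge detection with an explicit [low, high] threshold pair.
BW = edge(I, 'canny', [0.1, 0.3]);   % placeholder thresholds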


Figure 3.5: Canny’s filter applied

3.3 Segmentation

Image segmentation is the division of an image into regions or categories which correspond to different objects or parts of objects. Every pixel in an image is put into one of a number of these categories. A good segmentation is usually one with the following properties:

• Pixels in the same category have similar values and form a connected region.

• Neighbouring pixels in different categories have dissimilar values.

This process is often a critical step in image analysis, since at this point we move from considering each pixel as a unit to considering groups of pixels, called objects, as units in the image.

There are three general approaches to segmentation:

• Thresholding

• Edge-based

• Region-based

3.3.1 Thresholding

The most commonly used method to segment an image is to do so based on the values of the pixels: we either include them in our desired group or exclude them from any further processing. We can write this in the following way. Let t be the threshold and check whether

$$f_{ij} \le t\,.$$

This way we split our image into two categories: everything in category 1 has a value below or equal to the threshold, and category 2 contains everything else. We can also choose more than one threshold if desired, and thus obtain more than two categories. Figure 3.6 shows how this method looks applied to our problem.

(a) Original image (b) Threshold image

Figure 3.6: Threshold applied to the original scan

Histogram-based thresholding

The histogram of an image was already discussed briefly when the topic of filters came up. This time one can use it for choosing a threshold. Before we talk about the algorithm to determine the threshold we need some notation: let h_k, k = 0, ..., N, denote the number of pixels in the image with value k, where N is the maximum of the pixel values.


1. Choose an initial value for the threshold t, for example the mean pixel value of the image.

2. Calculate the mean pixel value in each category, the first for values less than or equal to t, the other for bigger ones:

$$\mu_1 = \sum_{k=1}^{t} k\,h_k \Big/ \sum_{k=1}^{t} h_k\,, \qquad \mu_2 = \sum_{k=t+1}^{N} k\,h_k \Big/ \sum_{k=t+1}^{N} h_k$$

3. Update t in the following way:

$$t = \left\lfloor \frac{\mu_1 + \mu_2}{2} \right\rfloor,$$

where we have the usual floor function.

4. Repeat the second and third step until t does not change its value anymore.

This is the so-called intermeans algorithm, which divides the histogram into two parts with respect to the number of pixels in each part.
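This iteration is easy to express in MATLAB; the sketch below assumes the histogram counts are stored in a row vector h, where h(k+1) is the number of pixels with value k, for k = 0, ..., N.

% Intermeans thresholding sketch on a histogram h of the values 0..N.
vals = 0:N;                          % the possible pixel values
t = round(sum(vals .* h) / sum(h));  % step 1: start from the mean value
t_old = -1;
while t ~= t_old                     % step 4: iterate until t is fixed
    t_old = t;
    lo  = vals <= t;                 % category 1: values <= t
    mu1 = sum(vals(lo)  .* h(lo))  / sum(h(lo));   % step 2: category means
    mu2 = sum(vals(~lo) .* h(~lo)) / sum(h(~lo));
    t = floor((mu1 + mu2) / 2);      % step 3: update the threshold
end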

3.3.2 Edge-based

The second way to segment an image is to do it based on the edges in the picture. There are some algorithms available for semi-automatically inserting edges into an image. After this has been done, those edges – connected chains of pixels – divide the image into regions. A differentiation of areas can be achieved by grouping together pixels that are not separated by an edge pixel. The connected components algorithm, proposed by Pfaltz in 1966, is one that does this without the need of a supervisor, and it works in the following way (a small usage sketch in MATLAB follows the list):

• Initialize the count of the number of categories: K = 0.

• Go through each pixel (i, j), row by row, for each value of i taking j = 1, . . . , n.

• Now one of the following things happens:

  – If both previously visited neighbours (i − 1, j) and (i, j − 1) are edge pixels, we update in the following way:

    $$K = K + 1\,, \quad h_K = K\,, \quad g_{ij} = K\,,$$

    where h_K keeps track of which categories are equivalent, and g_{ij} records the label for the pixel we are looking at.

  – If just one of the two neighbours is an edge pixel, then our pixel is grouped with the non-edge neighbour.

  – If neither is an edge pixel, then it is put into the same group as one of them,

    $$g_{ij} = g_{i-1,j}\,,$$

    and if the neighbours have labels which have not been marked as equivalent, i.e., $h_{g_{i-1,j}} \ne h_{g_{i,j-1}}$, then this needs to be done, because the regions are connected at pixel (i, j).

• After we have gone through every pixel, the categories that have been marked as equivalent are grouped together.
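In MATLAB, this kind of labelling of a binary image is provided by the bwlabel function; a minimal usage sketch, assuming the objects to label are the zero (bone) pixels of our BWI S:

% Label the connected regions of the bone pixels (value 0 in S).
[Lbl, num] = bwlabel(~S, 8);   % 8-connected components
% Lbl(i,j) holds the region label of pixel (i,j); num is the region count.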

But sometimes it is preferable to get the borders of a region and work with those rather than with the region itself. This is why it is also important to talk about identifying contours.

Identifying contours

The input for this to work is a labeled image. By labeled we mean that every object in the image has a different value assigned to it. Compare Figure 3.7, where one can see one object labeled 1 and the other labeled 2.

How could an algorithm to identify contours work? It would consist of two parts:

• Part one is to go through the image from left to right or top to bottom. More than one row/column can be considered at the same time. As soon as it finds the first pixel of an object, the second part starts.

• It continues as it normally would, but checks whether the following pixel is a boundary point or not. If it deems it to be one, then this point is added to the list of edge pixels. As a result we get an output as shown in Figure 3.8.

The reader can find the images and a more detailed explanation in the book [10].


Figure 3.7: A labeled image with two regions

Figure 3.8: The output of the described method

3.3.3 Region-based

In this subsection we want to talk about region-based or spatial clustering methods. Here, pixels of similar values are combined into a group, or split apart if they are not similar. Therefore, it makes sense to talk about merging, splitting, or split-and-merge methods. One fully automated algorithm is the watershed algorithm, of which we want to give the reader a very short description: the concept is to take a greyscale image and look at its gradients; the next step is to find their minima, and then, with a chosen threshold, the image is segmented. The split-and-merge algorithm proposed by Horowitz and Pavlidis in 1976 consists of two stages:

• The variance of the whole image is calculated. If it exceeds a certain limit, the image is split into quadrants. This procedure is repeated with every quadrant over and over again until the whole image consists of squares of different sizes with variances below the limit.

• The second stage consists of combining neighbouring squares and checking whether the variance of the new region is still below the prescribed limit. This step is repeated until every region is as big as possible without exceeding the limit.

Although one always gets a unique solution for the splitting part, this is no longer true for the merging stage: here the final result depends on the order in which the list of squares is worked off.

3.4 Mathematical Morphology

In this section we consider our image to be a binary one – either black for 0 or white for 1. The basic idea is to approach image analysis based on set theory rather than on arithmetic like the other chapters in this thesis. There are a couple of set operations (erosion, dilation, opening and closing) of which we want to give the reader a short overview.

We start with the most basic one called

Erosion

Suppose A is a binary image, and let S be the structuring element – a set with a specified form and a reference pixel, so that we know the position of the whole element. If the element is placed at (i, j) we refer to it as $S_{(i,j)}$. Then the erosion of A with S is defined as the set of all pixels (i, j) of A for which $S_{(i,j)}$ is fully contained in A. In mathematical form:

$$A \ominus S = \{(i,j) : S_{(i,j)} \subset A\}$$

The complementary operation is

Dilation

It is simply defined as the complement of an eroded set:

$$A \oplus S = (A^c \ominus S)^c$$

Before we can talk about further operations, we introduce S′ as a 180° rotation of S around its reference pixel (i, j). If S is symmetric around the reference pixel, then obviously S = S′ (see Figure 3.9).


Figure 3.9: Two easy examples taken from [9]

Opening and Closing

Operations which are also widely used, and which were used in this thesis, are the opening and closing operations, often denoted by $\psi_S(A)$ and $\phi_S(A)$:

$$\psi_S(A) = (A \ominus S) \oplus S'$$

$$\phi_S(A) = (A \oplus S) \ominus S'$$

In other words, opening an image is an erosion followed by a dilation, and closing is the other way around. Applying one of them multiple times does not produce further effects; we call such operations idempotent. In Figure 3.10 one can see an example of these operations applied to a picture.
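With the Image Processing Toolbox these operations are one-liners; the structuring element below (a 3 × 3 square) is an illustrative choice, not necessarily the one used for our scans.

% Morphological opening and closing with a small square structuring element.
se = strel('square', 3);       % illustrative structuring element
A_open  = imopen(BW, se);      % erosion followed by dilation
A_close = imclose(BW, se);     % dilation followed by erosion
% Idempotence: imclose(imclose(BW, se), se) equals imclose(BW, se).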

3.5 Edge Detector Performance

Criteria to consider in evaluating performance of edge detectors include the following:


(a) Original image as black and white image

(b) Closing operation applied (c) Applying erosion operator

Figure 3.10: Morphological operations on our scan

• Probability of missing edges

• Probability of false edges

• Mean square distance of the edge estimate from a true edge

• Error in estimation of the edge angle

• Tolerance to distorted edges and other features such as corners or junctions

The first two points concern the performance of an algorithm as a detector, the second two concern its performance as an estimator of edge location and orientation. The last one concerns the tolerance of the edge algorithm to edges that depart from the ideal model.

The performance can be evaluated in two stages:

1. Count the number of missing or false edges.

2. Measure the error distribution for the estimated location and orientation.

How does one count, for example, the missing edges in an image? The simplest way is to have an image where all the edges are known and then count the differences between the output of the edge detector and the original image. The results vary with the threshold, the interaction between edges, and other features. If one applies the edge detector to an image without any additional noise, no smoothing, and no interaction between edges, then one should get a perfect result – meaning the whole set of edges that appear in the original image. This set can be used as the standard set for comparison. Now, if we have the result for an image with all the previously excluded effects included, we need to create a one-to-one mapping to the standard set. Edges too far from true edges are labeled as false ones; the ones that pair closely with one from the set are correct. After this procedure, edges that have not been paired are the missing ones.

This edge detection procedure is only based on the ability to indicate the presence or absence of edges; it says nothing about the accuracy of the location or orientation of the edges. Such a comparison requires the model of the test case to be available: the edge locations and orientations must be compared with a mathematical description of the model. How far away is the edge from the true location (x, y) of the edge? The same question can be asked for the orientation. One estimates the error distribution from a histogram of the location errors. The orientation error of an edge is measured by comparing the orientation of the edge fragment with the angle of the normal to the curve that models the contour, evaluated at the closest point to the edge point. This whole section has mostly been taken from [3].

Now that we have discussed all the medical and mathematical basics we can discuss the approach to our actual problem in the following chapter.


Chapter 4

Application

Most of the things written in this and the following part of the thesis come from the MathWorks website [5].

4.1 Preparing the image

The first thing we need to do before we apply any morphological operations or filters is to threshold the image.

MATLAB Code: Thresholding the image

[m, n] = size(Y);
S = ones(m, n);
Bone = [300, 1300];            % Hounsfield interval corresponding to bone
for i = 1:m
    for j = 1:n
        P = Y(i, j);
        if (P > Bone(1,1) && P < Bone(1,2))
            S(i, j) = 0;       % mark bone pixels with 0 in the BWI
        end
    end
end

We need to introduce an interval of Hounsfield numbers (see Chapter 2 for details) corresponding to bone. Then we go through every row and column and check whether the value at the pixel lies in the desired interval. If it does, we set the value of our black and white image – from this point on we also use the abbreviation BWI, here called S – to zero. So we are basically creating a binary image from our CT scan (see Figure 4.1).


Figure 4.1: Threshold image of a scan

As one can clearly see in the picture, there is still a lot of noise in there. The next step is to run the image through an algorithm which clears it of all unnecessary white spots.

Following Figure 4.2, it might be best to explain in words what exactly the algorithm does (a short sketch of the corresponding MATLAB call follows the list):

• We take as an input our BWI and an integer.

• Go through every pixel and check if it is connected to other pixels; if so, group them together as an object.

• Compare the integer input with the number of pixels in any given object.

• If the first number is bigger than the size of the object, we get rid of it.
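In MATLAB this cleanup is essentially what bwareaopen does; a sketch, where the minimal object size is a placeholder and the image may need to be inverted first, depending on whether the objects are encoded as ones or zeros:

% Remove connected components smaller than a given number of pixels.
minSize = 50;                       % placeholder for the integer input
S_clean = bwareaopen(S, minSize);   % keeps only objects with >= minSize pixels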

Figure 4.2: After the image ran through the cleaning-up algorithm

The next step is to use the morphological close operation on the output of the algorithm. We already discussed this in Section 3.4; what we did not discuss are the details – for example the structure of our reference element and its size. There are different structuring elements to choose from, but the choice for us was an easy one, since not every element was suitable for our problem – one might ask why, and the answer is in Figure 4.3.

In Figure 4.3 (a) we see that, after the application of our operations, the head of the upper arm would be connected to the shoulder. Since we need a clear image of this part, the disk is one of those elements that is not suitable for this kind of problem, the reason being that the two main parts of the image – shoulder and upper arm – are too close together. Moreover, if we were to apply an edge detection algorithm, we would get inner and outer edges for bones that have holes in them. Therefore, we decided to include a piece of code that fills up all the holes within an already existing object, as one can see in Figure 4.4.
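The hole filling mentioned here corresponds to MATLAB's imfill; a minimal sketch:

% Fill all holes that are fully enclosed by objects in the binary image.
S_filled = imfill(S_clean, 'holes');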

(a) Disk as a structuring element (b) Square as a structuring element

Figure 4.3: Comparison of different structuring elements

Figure 4.4: Filling up holes within objects

As we already discussed earlier, the closing operation is an idempotent one, meaning we do not get any further progress by applying it again. One case where we do indeed get better results from multiple applications of the same operation is when we erode parts of the image to make it smoother for further treatment. Again, similar to the closing operation, we have a choice of different structuring elements and their sizes. The bigger we choose our structuring element to be, the smoother our final result will be, but as a trade-off we cut off more of the object, which is not really suitable for our task at hand.

In Figure 4.5 one can see the comparison between erosion with a diamond of size 6 and erosion with a diamond of size 1 applied twice to the same image.

We use Canny’s filter in our algorithm, since it delivers the best results compared to all the others. It has been discussed in Section 3.2.2.


(a) Structural element with size 6 (b) Structural element with size 1

Figure 4.5: Comparison of different sizes of the structural element

4.2 Spline interpolation

For many years, long, thin strips of wood or some other material have been used by craftsmen to create a smooth curve between specific points, or knots. These strips, or splines, were – and sometimes still are – anchored in place by attaching weights at points along the spline. By varying the points where these are attached to the spline itself and the position of both the spline and the weights relative to the surface, the curve can be made to pass through the specified points, provided a sufficient number of weights are used. The mathematical spline is the result of replacing the wooden spline by its elastica and then approximating the latter by a piecewise cubic – usually of different form between each pair of adjacent weights – with certain discontinuities of derivatives permitted at the junctions where two cubics join. In its simple form, the mathematical spline is C² continuous. However, there is usually a jump discontinuity in its third derivative at the points of the weights.

The cubic spline, which we use here, proves to be an effective tool in processes of interpolation and approximation. It also possesses strong convergence properties.

So to reiterate, the basic idea is to divide our interval into smaller subintervals. Then we create piecewise polynomial functions on each of those with the following restrictions. Let S be an interpolant of f, composed of cubic pieces $s_i$. At the junctions between subintervals the following two equations have to hold:

$$s_i'(x_i) = s_{i-1}'(x_i)\,, \qquad s_i''(x_i) = s_{i-1}''(x_i)\,.$$

This ensures that we have a C²-smooth function (see [1] for details).

Together with the partners of this project, we decided that it would be more suitable in our case to choose the area of interest ourselves for every patient, since it is not always in the same position in different images. Figure 4.6 shows one instance of this case.

MATLAB Code: Spline interpolation

[B, L] = bwboundaries(S, 'noholes');
for k = 1:length(B)
    boundary = B{k};              % keep the last boundary found
end
t = 4;
for k = 1:length(boundary)
    if boundary(k,1) ~= 1
        t = t + 1;
    else
        break;
    end
end
sb = boundary(t:(end/2), :);      % take only half of the boundary
x = zeros(size(sb));
for i = 1:length(x)
    x(i) = i;
end
y = [sb(1:end,1), sb(1:end,2)];
cs = spaps(x(:,1), transpose(y), 50);

In the first line of this code segment, one can see that we use an already existing MATLAB function, which returns the boundaries B. The next step is to determine the length of the boundary and save it in a new list. Since our object has an inner and an outer part, we have to decide which one we want to take; this is why there is the second for-loop, in which we determine the starting point of our spline. Applying similar logic, we only go halfway along the whole boundary to determine the end point of the spline. The next thing we need is to split the x-axis into n different parts, where n is the length of our boundary. This is needed to interpolate our spline at those points.

Figure 4.6: Choosing the area of interest by hand

The most important function we use here is the one in the last line. It returns the B-form of the smoothest function f that lies within the given tolerance. The distance function between f and the input data is given by

$$E(f) = \sum_{j=1}^{n} w(j)\,|y_j - f(x_j)|^2\,;$$

furthermore, smoothest means that the integral

$$F(D^2 f) = \int_{\min(x)}^{\max(x)} |D^2 f(t)|^2\, dt$$

is minimized, where $D^2 f$ denotes the second derivative of f. When the third input argument of the function, called tol, is non-negative, the spline is determined as the unique minimizer of

$$\rho\, E(f) + F(D^2 f)\,,$$

where the smoothing parameter ρ is chosen so that E(f) equals the prescribed tolerance.


Chapter 5

Results

5.1 Spline comparison

The first problem we encountered was having too much noise in the picture. First we need to explain how noise looks in such an image. We have a matrix full of numbers on the Hounsfield unit scale, meaning every pixel is described by the density number of the material at the corresponding position in the scanned person. Additional noise is represented, in mathematical form, as random values added to or subtracted from some entries of our matrix. This creates two problems. The first is that we are not able to distinguish clearly between bones and other materials. The other is that sometimes we do not recognize bones as such, because of the noise level.

As one can clearly see in Figure 5.1, the program could not detect all the bone elements correctly, which in turn creates problems when using edge detection or morphological algorithms. Thus, we are not able to apply the next couple of steps to get a continuous spline from the head to the surgical neck. These types of data were the first ones that we discarded.

Figure 5.1: Our algorithm applied to a picture with too much noise

Since patients were scanned lying down on a table, the pictures of the upper arm have been taken in different positions. These tilted scans result in tilted splines, which makes them very hard to compare (see Figure 5.2). One would have to rotate the splines back into the same position for all the different patients, so that we could put them on top of each other. But this was too inconvenient to do, since we have lots of pictures and lots of different angles, for which it would not have been an easy task to determine the degree of rotation for every image.

(a) Tilted scan (b) Non-tilted scan

Figure 5.2: Comparison of a tilted/non-tilted picture

The next thing we want to address is the problem of different sizes of the pictures (see Figure 5.3). Scans of different proportions are often transferred to mathematical images of similar size. This means that if we have a picture of 247 mm × 226 mm and one of 107 mm × 90 mm, both result in matrices with dimensions very close to each other, and not in one matrix that is over two times bigger than the other, as the dimensions of the pictures suggest. This results in a more detailed image the smaller the size of the scan is, which is not what we want, since it amplifies all the features that we try to get rid of. The curvature of the bone is also subject to this amplification. If we were to compare splines resulting from images of different sizes, we would need to scale them back to a common size, which is not an easy task, since the actual size of the arm is not known.

Interpolation vs approximation

MATLAB Code: Spline approximation

csa = spap2(1, 4, x(:,1), transpose(y));

This returns the B-form of the spline f of order four with a knot sequence chosen by the program (since the first argument of the function is 1), for which

$$\sum_{j=1}^{n} w(j)\,|y_j - f(x_j)|^2$$

is minimized, with default weights equal to one. If the vector x satisfies the Schoenberg-Whitney conditions, i.e.,

$$\mathrm{knots}(j) < x_j < \mathrm{knots}(j+k)\,, \quad \text{for } j = 1, \dots, \mathrm{length}(x)\,,$$

then there is a unique spline satisfying $y_j = f(x_j)$. If those conditions are not satisfied, then no spline is returned.

In Figure 5.4 one can see the difference between the approximation, with the knot sequence chosen by the algorithm, and the interpolation approach. The problem with this approach is that we cut corners in the area where it is really important not to lose any information. The other approach we tried was to add additional knots, specifically in the area of interest – meaning the area between the head and the surgical neck of the upper arm.

This results in other difficulties. If we add too many knots, the spline oscillates a lot around its mean value, which has a bad influence on calculating the curvature of the spline – the final approach used to solve this problem (see Figure 5.5).

5.2 Curvature comparison

What do we gain when we compare the curvature instead of the splines themselves? One does not need to care about the angle at which the picture was taken, since the curvature is always a relative measure. We also found that the size of the image does not matter that much when comparing results.


(a) Small section (b) Large section

Figure 5.3: Comparison of a small section and a large section

To calculate the curvature κ we used the following formula:

$$\kappa = \frac{x''\,y' - x'\,y''}{\left(x'^2 + y'^2\right)^{3/2}}\,,$$

with $x'$ and $x''$ being the first and the second derivative, respectively. We compute these with the following MATLAB functions:

MATLAB Code: Calculating the first and second derivative

dF  = fnder(cs, 1);
ddF = fnder(cs, 2);

Here the second argument is the order of the derivative we want to calculate and the first one is our B-spline.


Figure 5.4: Comparison between interpolation (blue) and approximation (red)


MATLAB Code: Calculating the curvature

k = zeros(1, length(cs.coefs));        % one curvature value per coefficient
for i = 1:length(k)-2
    % numerator x''y' - x'y'' matches the curvature formula above;
    % row 1 of the coefficients is the x component, row 2 the y component
    k(i) = (ddF.coefs(1,i)*dF.coefs(2,i) - dF.coefs(1,i)*ddF.coefs(2,i)) / ...
           (dF.coefs(1,i)^2 + dF.coefs(2,i)^2)^(3/2);
end

This loop evaluates the curvature at each point $x_j$.

Figure 5.6: Curvature from a patient with a small (x) and big bone (o)

As we can see in Figure 5.6, the curvatures of a patient with a large diameter of the head of the upper arm and of one with a smaller diameter behave very similarly. The wiggles between the maximum and the minimum and at the tail come from the resolution of the image, since we cannot smooth out every jump in pixels that we see.

In Figure 5.7 one can see another comparison, this time between similarly sized bones. One can see a small offset in the x-direction; this is because we choose the area of interest by hand.


Figure 5.7: Curvature from a patient with similar sized bones

5.3 Interpretation

After everything was done, the data were sent back to our partners at the hospital. We sent them two sets of data: the first was the interpolated spline and the second was the plot of the curvature. At first they tried to work with the latter, but ultimately decided that it was easier for them to compare the curves with real-life images. But both ways – comparing the curvature of each image and comparing the spline to actual images – led to the same conclusion. It does not matter whether the patient is tall or small, a man or a woman, young or old. In general, people have a very similar form of the area between the head of the upper arm and its surgical neck. This is exactly what the doctors hoped for: everyone is within a small tolerance. It now makes sense to build templates of such a form for patients with broken bones, which in turn positively influences the healing process.


Bibliography

[1] J. H. Ahlberg, E. N. Nilson, and J. L. Walsh, The Theory of Splines and Their Applications, Academic Press, New York, 1967.

[2] J. Canny, A Computational Approach to Edge Detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, Nov. 1986.

[3] E. R. Davies, Machine Vision, Chapter 5, Morgan Kaufmann, 2005.

[4] C. A. Glasbey and G. W. Horgan, Image Analysis for the Biological Sciences, John Wiley & Sons, New York, 1995.

[5] https://de.mathworks.com/help/, April 2019.

[6] S. Peltonen, M. Gabbouj, and J. Astola, Nonlinear filter design: methodologies and challenges, Tampere University of Technology, 2001.

[7] O. S. Pianykh, Digital Imaging and Communications in Medicine (DICOM), Springer-Verlag, 2008.

[8] https://radiopaedia.org/articles/hounsfield-unit, June 2019.

[9] Md. N. Sadat, S. Purification, and Md. Shahjahan, A novel approach to retrieve Bangla text from document image using texture-based image segmentation and optical character recognition, 2013.

[10] L. Shapiro and G. Stockman, Computer Vision, Prentice Hall, 2001.


List of Figures

2.1 Three scans of different patients
2.2 Two scans of different patients
3.1 Representation of a digital image
3.2 Edge profiles
3.3 Pictures of a Gaussian filter applied to an image with different σ
3.4 Pictures of different filters applied to the original image
3.5 Canny's filter applied
3.6 Threshold applied to the original scan
3.7 A labeled image with two regions
3.8 The output of the described method
3.9 Two easy examples taken from [9]
3.10 Morphological operations on our scan
4.1 Threshold image of a scan
4.2 After the image ran through the cleaning-up algorithm
4.3 Comparison of different structuring elements
4.4 Filling up holes within objects
4.5 Comparison of different sizes of the structural element
4.6 Choosing the area of interest by hand
5.1 Our algorithm applied to a picture with too much noise
5.2 Comparison of a tilted/non-tilted picture
5.3 Comparison of a small section and a large section
5.4 Comparison between interpolation (blue) and approximation (red)
5.5 Comparison with a knot sequence that we chose
5.6 Curvature from a patient with a small (x) and big bone (o)
5.7 Curvature from a patient with similar sized bones
