(1)

Multimedia Databases

Wolf-Tilo Balke, Janus Wawrzinek

Institut für Informationssysteme

(2)

• Audio Retrieval

- Basics of Audio Data

- Audio Information in Databases
- Audio Retrieval

Previous Lecture

(3)

7 Audio Retrieval

7.1 Low-Level Audio Features
7.2 Difference Limen
7.3 Pitch Recognition

7 Audio Retrieval

(4)

Typical Low Level Features

– Mean amplitude (loudness)
– Frequency distribution, bandwidth
– Energy distribution (brightness)
– Harmonics
– Pitch

• Measured

– In the time domain: each point in time is assigned an amplitude

7.1 Low-level Audio Features

(5)

Fourier analysis:

– Simple characterization by Fourier transform

– Fourier coefficients are descriptive feature vector

Issues:

– The time domain does not show the frequency components of a signal

– The frequency domain does not show when the frequency components occur in time

7.1 Fourier Analysis
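A minimal sketch of this idea in Python (assuming numpy; the function name and the truncation to `n_coeffs` coefficients are illustrative choices, not part of the lecture):

```python
import numpy as np

def fourier_features(x, n_coeffs=32):
    """Describe a signal window by the magnitudes of its first
    Fourier coefficients (a simple descriptive feature vector)."""
    spectrum = np.abs(np.fft.rfft(x))               # magnitude spectrum
    spectrum /= np.linalg.norm(spectrum) + 1e-12    # normalize for comparability
    return spectrum[:n_coeffs]

# Two signals can then be compared by vector distance, e.g.:
# d = np.linalg.norm(fourier_features(x1) - fourier_features(x2))
```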

(6)

Spectrograms: combined representation of the time and frequency domains

– Raster image
– X-axis: time
– Y-axis: frequency components
– Gray value of a point: the energy of that frequency at that time

• Allows, for example, analysis of regularity

7.1 Spectrogram
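A short sketch of how such a spectrogram could be computed, assuming scipy is available; the sine test signal merely stands in for real audio:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000                        # assumed sampling rate in Hz
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)   # stand-in for a real audio signal

# f: frequency bins (y-axis), times: window positions (x-axis),
# Sxx[i, j]: energy of frequency f[i] at time times[j] ("gray value")
f, times, Sxx = spectrogram(x, fs=fs, nperseg=512)
```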

(7)

• Spectrogram of the spoken word “durst”

7.1 Spectrogram

(8)

• Use of different low-level features for the automatic classification of audio files

– Different audio classes have typical values for various properties

– Thus, various typical feature vectors

7.1 Classification

(9)

Distinguishing speech and music

• The individual characteristics are difficult to predict, but there are general trends

• Don't just use a single feature, but evaluate a combination of all features

• Dependent and independent features

7.1 Example

(10)

Bandwidth

– For speech rather low, 100–7,000 Hz

– In music it tends to be high, 16–20,000 Hz

Brightness (central point of the bandwidth):

– In speech it is low (mainly due to the low bandwidth)

7.1 Example

(11)

Proportion of silence

– Frequent pauses in speech (between words and sentences)

– Low percentage of silence for music (except for solo instruments)

Variance of the zero crossing rate (over time)

– In speech there is a characteristic structure of syllables: short and long vowels, therefore a fluctuating zero crossing rate

7.1 Example

(12)

Simple classification algorithm (Lu and Hankinson, 1998)

7.1 Example

[Figure: decision tree of the classification algorithm. Starting from the audio signal, thresholded (high/low) tests on brightness, portion of silence, and variance of the zero crossing rate lead to the classes Music, Speech, and Solo music.]
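A hedged sketch of such a rule-based classifier in Python. The test order and all thresholds are illustrative assumptions, not the exact tree of Lu and Hankinson:

```python
def classify_audio(brightness, silence_ratio, zcr_variance,
                   bright_thr=2000.0, silence_thr=0.2, zcr_var_thr=0.05):
    """Toy threshold cascade over three low-level features.
    All thresholds are made-up placeholders and would have to be
    calibrated on a concrete collection (see the next slide)."""
    if silence_ratio > silence_thr:        # frequent pauses
        if zcr_variance > zcr_var_thr:     # fluctuating syllable structure
            return "speech"
        return "solo music"                # pauses, but steady signal
    return "music" if brightness > bright_thr else "speech"
```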

(13)

• Quantitative high / low estimates are highly dependent on the collection

– Determine reference vector for each class by a set of training examples

• Assignment of new audio files to classes is based on minimum distance of its feature vector to one of the reference vectors of the respective class

7.1 Example
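A minimal sketch of this minimum-distance assignment (Euclidean distance; numpy assumed):

```python
import numpy as np

def assign_class(feature_vec, reference_vectors):
    """reference_vectors: dict mapping class name -> reference (mean)
    vector learned from training examples. Returns the class whose
    reference vector has minimum distance to the new feature vector."""
    return min(reference_vectors,
               key=lambda c: np.linalg.norm(feature_vec - reference_vectors[c]))
```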

(14)

• Low-level Features for Audio Retrieval

– For good feature vectors, the audio signal must be divided into time slots

– Compute a vector for each window

• Calculate low-level features in the time window

• Compute statistical characteristics of the low-level features

Perceptual comparison of audio files

7.1 Static Coefficients
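A sketch of the windowing step in Python; the 25 ms window length and the two per-window features (RMS loudness in dB, zero crossing rate) are illustrative choices:

```python
import numpy as np

def frame_features(x, fs, win_ms=25):
    """Split a signal into fixed-length windows and compute one
    low-level feature vector per window."""
    n = int(fs * win_ms / 1000)
    feats = []
    for start in range(0, len(x) - n + 1, n):
        w = x[start:start + n]
        rms_db = 20 * np.log10(np.sqrt(np.mean(w ** 2)) + 1e-12)  # loudness
        zcr = np.mean(np.abs(np.diff(np.sign(w)))) / 2            # crossings/sample
        feats.append((rms_db, zcr))
    return np.array(feats)
```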

(15)

• Four statistical characteristics (Wold et al., 1996)

Loudness (perceived volume)

• Measured as the root mean square (RMS) of the amplitude values (in dB)

• More sophisticated methods take into account differences in the perceptibility of different parts of the frequency spectrum

7.1 Static Coefficients

(16)

Brightness (perceived brightness)

• Defined as the center-of-gravity of the Fourier spectrum

• Logarithmic scale

• Describes the amount of high frequencies in the signal

Bandwidth (frequency bandwidth)

• Defined as a weighted average of the differences of the Fourier coefficients to the center-of-gravity of the spectrum

• Amplitudes are used as weights

7.1 Static Coefficients
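Both definitions translate directly into code; a sketch under the assumption that brightness is the spectral centroid as defined above (numpy assumed):

```python
import numpy as np

def brightness_and_bandwidth(window, fs):
    """Spectral centroid (brightness) and the amplitude-weighted
    average distance of the frequencies to it (bandwidth)."""
    mags = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    total = mags.sum() + 1e-12
    centroid = (freqs * mags).sum() / total              # center of gravity
    bandwidth = (np.abs(freqs - centroid) * mags).sum() / total
    return centroid, bandwidth
```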

(17)

Pitch (perceived pitch)

• Calculated from the frequencies and amplitudes of the peaks within each interval (pitch tracking)

• Pitch tracking is a difficult problem and is therefore often approximated in simpler systems by the fundamental frequency

7.1 Static Coefficients

(18)

• A time-dependent function for each quantity in each time window

• E.g., laughter

– Loudness
– Brightness
– Bandwidth
– Pitch

7.1 Static Coefficients

(19)

Aggregate statistical description of the four functions through:

– Expected value (average)
– Variance (mean square deviation)
– Autocorrelation (self-similarity of the signal)

• Either for each window or for the whole signal (results in a 12-dimensional feature vector: three statistics for each of the four quantities)

7.1 Static Coefficients
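A sketch of the aggregation, using lag-1 autocorrelation as the self-similarity measure (one plausible reading; the lecture does not fix the lag):

```python
import numpy as np

def aggregate(trajectory):
    """Mean, variance, and lag-1 autocorrelation of one per-window
    trajectory (loudness, brightness, bandwidth, or pitch)."""
    t = np.asarray(trajectory, dtype=float)
    d = t - t.mean()
    autocorr = (d[:-1] * d[1:]).sum() / ((d * d).sum() + 1e-12)
    return t.mean(), t.var(), autocorr

def feature_vector(loudness, brightness, bandwidth, pitch):
    """3 statistics x 4 quantities -> 12-dimensional feature vector."""
    return np.concatenate([aggregate(q)
                           for q in (loudness, brightness, bandwidth, pitch)])
```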

(20)

• Example: Laughter

• Each sound has typical values

• Thus we can classify audio files

7.1 Static Coefficients

(21)

A training set of sounds of a class leads to a perceptual model for each class

– Compute the vector of the mean

– Calculate the covariance matrix

7.1 Static Classification

(22)

• For every new audio file, compute the Mahalanobis distance to each class

• Assign the file to a class (either by a distance threshold or by minimum distance)

• Determine the probability of correct classification

7.1 Static Classification
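The Mahalanobis distance of a vector x to a class with mean μ and covariance Σ is D²(x) = (x − μ)ᵀ Σ⁻¹ (x − μ). A minimal sketch of the resulting classifier (numpy assumed; the threshold variant is omitted):

```python
import numpy as np

def mahalanobis_sq(x, mean, cov):
    """D^2(x) = (x - mu)^T  Sigma^{-1}  (x - mu)"""
    d = x - mean
    return float(d @ np.linalg.solve(cov, d))

def classify(x, models):
    """models: dict mapping class -> (mean vector, covariance matrix),
    both estimated from the training set; minimum-distance rule."""
    return min(models, key=lambda c: mahalanobis_sq(x, *models[c]))
```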

(23)

• Classification for laughter (Wold et al., 1996)

7.1 Application in Classification

(24)

• Statistical properties work well for retrieval and classification of short audio data

– Parameters statistically represent human perception
– Easy to use, easy to index, query by example

– The only such extension in commercial databases

• DB2 AIV Extenders (development discontinued)

• Oracle Multimedia

7.1 Evaluation

(25)

• Ok for differentiating between speech and music or laughter and music

– But purely statistical values are rather unsuitable for classifying and differentiating between musical pieces

– Detection of notes from the audio signal (pitch determination) does not work very well

– How does one define the term “melody”? (especially for queries, query by humming)

7.1 Evaluation

(26)

Recognition of notes from signal

– Variety of instruments

– Overlap of different instruments (and possibly voice)

• Simple, if we have data in MIDI format and the audio signal was synthesized from it

7.1 Problem

(27)

• Definition of “melody”

– Melody = sequence of musical notes
– But querying for a melody has to be:

• Invariant under pitch shift (soprano vs. bass)

• Invariant under time shift

• Invariant under slight variations

• Often, not the sequence of notes themselves is used, but the sequence of their differences (see the sketch below)

7.1 Problem
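A tiny sketch of such a difference (interval) encoding; the MIDI note numbers are illustrative:

```python
def intervals(midi_notes):
    """Encode a melody by successive pitch differences in semitones;
    the encoding is invariant under transposition (pitch shift)."""
    return [b - a for a, b in zip(midi_notes, midi_notes[1:])]

# The same motif in C and transposed to G yields identical intervals:
assert intervals([60, 62, 64, 60]) == intervals([67, 69, 71, 67])  # [2, 2, -4]
```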

(28)

• Pitch has something to do with frequency

– Only useful for periodic signals, not for noise
– Harmonic tones have one main oscillation and several harmonics

7.2 Frequencies and Pitch

(29)

• Interference makes the automatic detection of the dominant pitch difficult

Human perception often differs from physical measurements

– E.g., fundamental frequency ≠ pitch

• However, we need the pitch to extract the melody line

7.2 Problem

(30)

Exactly how do people perceive frequency differences?

The difference limen is the smallest change that is reliably perceived (“just noticeable difference”)

– Accuracy varies with pitch, duration, and volume

– Experimental determination of average values for sine waves (Jesteadt et al., 1977)

7.2 Difference Limen

(31)

• Determined through psychological testing

– Two tones with 500 ms duration and a small frequency difference are played one after the other

– Subjects determine whether the second tone was higher or lower

– This results in a psychometric function relating the frequency difference to the accuracy of the classification (50%–100%)

7.2 Difference Limen

(32)

• (Jesteadt et al., 1977): difference limen of about 0.2%

7.2 Difference Limen

(33)

• 0.2% Difference Limen means that most people can distinguish a 1000 Hz tone from a 1002 Hz tone reliably

7.2 Difference Limen

(34)

• Quality of separation is not uniform across the frequency band

– Worse at high and low frequencies

• Tone duration is important

– Increasingly better for tones between 0–100 ms, constant for longer tones

• Volume is important

– Increasingly better for louder tones

7.2 Difference Limen

(35)

ANSI standard (1994)

“Pitch is that attribute of auditory sensation in terms of which sounds may be ordered on a scale extending from low to high. Pitch depends mainly on the frequency content of the sound stimulus, but it also depends on the sound pressure and the waveform of the stimulus”

• Typically, we limit ourselves to the melody line, to distinguish pitch from timbre

– E.g., “s” and “sh” sounds rather differ in timbre

7.3 Pitch Definition

(36)

Experiments on the frequency scale

– Test subjects adapt a generated sine wave (at 40 dB) to the perceived pitch (Fletcher, 1934)

– Histograms show the pitch (x Hz) and the agreement of all test subjects (x ± y Hz)

– Multimodal distributions indicate several pitches

• E.g., polyphonic music: some persons concentrate on one instrument, others on another

7.3 Pitch Determination

(37)

Location-dependent pitch detection

– The cochlea perceives different frequencies at different places

– High frequencies at the entrance of the cochlea

– Low frequencies at the end of the cochlea

– The brain recognizes which neurons were excited

7.3 Theoretical Models

(38)

• The stimulation of different neurons along the approximately 35 mm long basilar membrane in the cochlea is a typical pattern for audio coding

The pitch can be detected from these patterns

7.3 Location-dependent Models

(39)

• Formula for the place of excitation z (in millimeters) (Greenwood, 1990)

7.3 Location-dependent Models
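The formula on the slide is an image in the original; it is presumably the Greenwood function, reconstructed here with the commonly cited human parameters (treat the constants as an assumption):

```latex
% Greenwood (1990): place z (in mm from the apex of the ~35 mm
% basilar membrane) maps to characteristic frequency f in Hz
f(z) = 165.4\,\bigl(10^{0.06\,z} - 0.88\bigr)
```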

(40)

• The coding of the sound is not purely location-dependent, but also relies on temporal synchronization of the neurons

– All neurons fire spontaneously in random sequence, depending on their refractory characteristics

– When a sound with some frequency starts, it causes more neurons to fire synchronously

– The brain determines the pitch based on an “autocorrelation function” of the pattern

7.3 Time-dependent Models

(41)

• The two models address recognizing the pitches of individual sounds

• Pitch detection is more difficult in the case of complex tones

– Groups of neurons are excited in several locations or with interfering synchronization patterns

• Which neural excitation corresponds to the pitch?

7.3 Theoretical Models

(42)

• The lowest frequency generates the harmonics … is the pitch thus the fundamental frequency?

– Psychological experiments with the same note played with and without its fundamental frequency show that the pitch of the note is still rated the same

– Since synchrony remains the same with or without the fundamental frequency, this favors the time-dependent model

• But how do we evaluate the synchrony?

7.3 Fundamental Frequency

(43)

• The ear analyses complex sounds in different frequency bands

• Auditory processing organizes and integrates the different impressions

• The pitch is decided by matching against harmonic templates (Goldstein, 1973)

7.3 Auditory Organization

(44)

• Experiments favor centralized template matching

– We can perceive pitches even if we split the signal into disjoint parts which are then presented to the two ears separately

– The pitch is synthesized (but it doesn’t work for all partitions; they are usually heard as several pitches)

– Listeners can be misled by ambient noise to perceive a false template

7.3 Auditory Organization

(45)

• The pitch can also be synthesized as a non-occurring frequency, e.g., 296 Hz for the simultaneous play of the following non-harmonic tones (apparent 2/3 harmonic):

7.3 Auditory Organization

(46)

Pitch is a feature of the frequency at a particular time

– Pitch tracking in the frequency domain
– Pitch tracking in the time domain

• Length of the time window for the frequency spectrum

– At least twice the length of the estimated period

7.3 Pitch Tracking Algorithms
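For example, to detect a fundamental as low as 50 Hz (period 1/50 s = 20 ms), the analysis window must therefore be at least 40 ms long.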

(47)

Requirements

– Frequency resolution in the range of a semitone with the correct octave

– Detection of different instruments with well-defined harmonics (e.g., cello, flute)

– (Recognition of pitches for conversion into symbolic notation in real time for interactive systems)

7.3 Pitch Tracking Algorithms

(48)

HPS pitch detection is one of the simplest and most robust methods in the frequency domain (Schroeder, 1968), (Noll, 1969)

– A certain frequency range is analyzed for the fundamental frequency

• E.g., 50–250 Hz for male voices

– All frequencies in this range are analyzed for harmonic overtones

7.3 Harmonic Product Spectrum

(49)

• X(ω): strength of the frequency ω in the current time window

• R: number of harmonics to be checked

– E.g., R = 5

• The harmonic product spectrum is $Y(\omega) = \prod_{r=1}^{R} |X(r\omega)|$

• Pitch is then the maximum of the product spectrum Y over all frequencies ω₁, ω₂, … in the frequency range to be investigated

7.3 Harmonic Product Spectrum
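A straightforward Python sketch of HPS over the FFT bins; window handling and parameter values are illustrative:

```python
import numpy as np

def hps_pitch(window, fs, f_lo=50.0, f_hi=250.0, R=5):
    """Harmonic Product Spectrum: Y(w) = prod_{r=1..R} |X(r*w)|;
    returns the frequency in [f_lo, f_hi] maximizing Y."""
    mags = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    lo, hi = np.searchsorted(freqs, (f_lo, f_hi))
    best_f, best_y = 0.0, -1.0
    for i in range(lo, hi):
        y = 1.0
        for r in range(1, R + 1):          # harmonic r*w lives in bin r*i
            if r * i < len(mags):
                y *= mags[r * i]
        if y > best_y:
            best_f, best_y = freqs[i], y
    return best_f
```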

(50)

7.3 Harmonic Product Spectrum

(51)

• Problems occur due to noise at frequencies below 50 Hz

• Other problems occur due to frequent octave errors

– The pitch is often recognized an octave too high

• Rule-based solution:

– If the next closest amplitude under the pitch candidate has approx. half the frequency of the pitch candidate, and its amplitude is above a threshold, then select the pitch one octave below the pitch candidate

7.3 Harmonic Product Spectrum
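The rule translates into a few lines; the 5% frequency tolerance is an illustrative assumption, and the amplitude threshold must be chosen per application:

```python
def correct_octave(pitch, peaks, amp_threshold):
    """peaks: list of (frequency, amplitude) tuples of the spectrum.
    If a sufficiently strong peak sits at ~half the candidate
    frequency, select the pitch one octave below."""
    for f, a in peaks:
        if abs(f - pitch / 2) < 0.05 * pitch and a > amp_threshold:
            return pitch / 2
    return pitch
```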

(52)

• The ML algorithm (Noll, 1969) compares possible “ideal” spectra with the observed spectrum and selects the most similar one based on shape

Ideal spectra are pulse trains convolved with the transform of the tapering (window) function

• For the tapering function

– The signal section represented by each signal window (e.g., of length 40 ms) is tapered off (mainly at the edges)

7.3 Maximum Likelihood Estimator

(53)

• Generation of an ideal spectrum (by convolution)

7.3 Maximum Likelihood Estimator

(54)

• The error is determined between the spectrum Y under investigation and the ideal spectrum Ỹ_ω (with pitch ω):

$e(\omega) = \|Y - \tilde{Y}_\omega\|^2 = \|Y\|^2 + \|\tilde{Y}_\omega\|^2 - 2\,\langle Y, \tilde{Y}_\omega\rangle$

• The terms $\|Y\|^2$ and $\|\tilde{Y}_\omega\|^2$ remain relatively constant, so the error is governed by the inner product $\langle Y, \tilde{Y}_\omega\rangle$

7.3 Maximum Likelihood Estimator

(55)

• Pitch is the frequency with minimal error: $\hat{\omega} = \arg\min_\omega e(\omega) = \arg\max_\omega \langle Y, \tilde{Y}_\omega\rangle$

• In essence, this method is a “multiplication” of the input spectrum vector with a matrix of ideal spectra

7.3 Maximum Likelihood Estimator
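Seen this way, the estimator is one matrix-vector product; a sketch (numpy assumed, ideal spectra precomputed as rows of a matrix):

```python
import numpy as np

def ml_pitch(spectrum, candidate_freqs, ideal_spectra):
    """ideal_spectra: one ideal spectrum per row, aligned with
    candidate_freqs. Since ||Y||^2 and ||Y_w||^2 are roughly constant,
    minimizing the error equals maximizing the inner product."""
    scores = ideal_spectra @ spectrum       # <Y_w, Y> for every candidate
    return candidate_freqs[int(np.argmax(scores))]
```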

(56)

• Efficiency and error probability vary with the degree of resolution (the number of ideal spectra)

• Good results when the pitch of the analyzed signal is close to the pitch of one of the ideal spectra

7.3 Maximum Likelihood Estimator

(57)

• The most popular procedure in the time domain is the peak search in the autocorrelation function (ACF)

• The ACF measures the correlation of a signal with a shifted version of the same signal, where τ is the shift (lag):

$\mathrm{ACF}(\tau) = \sum_{t} x(t)\, x(t+\tau)$

7.3 Auto Correlation Functions

(58)

7.3 Auto Correlation Functions

(59)

• Since harmonic signals are strongly correlated with themselves when shifted by the pitch period, the ACF shows a peak for good pitch candidates

• Because of the multiplications the procedure is relatively expensive

– Therefore differences are often used rather than products

7.3 Auto Correlation Functions

(60)

• Average Magnitude Difference Function (AMDF): $\Psi(\tau) = \sum_{t} |x(t) - x(t+\tau)|$

– Its minima are where the ACF has peaks
– It is more efficient to compute

7.3 Auto Correlation Functions

(61)

• AMDF Ψ(τ)

7.3 Auto Correlation Functions

(62)

ACF and AMDF are independent and can therefore be combined into a robust estimator (Kobayashi and Shinamura, 2000)

• Significantly better robustness against noise (the best results for k = 1)

7.3 Auto Correlation Functions
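A sketch of ACF, AMDF, and their combination; dividing ACF(τ) by Ψ(τ) + k is one plausible reading of the combined estimator, with k = 1 as reported best:

```python
import numpy as np

def acf(x, tau):
    """ACF(tau) = sum_t x(t) * x(t + tau), for tau >= 1"""
    return float(np.dot(x[:-tau], x[tau:]))

def amdf(x, tau):
    """Psi(tau) = sum_t |x(t) - x(t + tau)|, for tau >= 1"""
    return float(np.sum(np.abs(x[:-tau] - x[tau:])))

def weighted_acf_pitch(x, fs, f_lo=50.0, f_hi=500.0, k=1.0):
    """Peak search over the lag range corresponding to [f_lo, f_hi]."""
    lags = range(int(fs / f_hi), int(fs / f_lo) + 1)
    best = max(lags, key=lambda t: acf(x, t) / (amdf(x, t) + k))
    return fs / best   # lag in samples -> frequency in Hz
```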

(63)

• The pitch can be identified relatively robustly for each time slot

• But normally a sequence of pitches alone is not enough for melody detection

– Windowing introduces errors into the individual recognitions (continuous pitch changes)

– Attack transients vs. the melody at sustain level

7.3 Melody Recognition

(64)

• HPS pitch detection for a cello

7.3 Melody Recognition

(65)

Filtering of transient faults and spontaneous octave jumps

• Monitor the size of the peaks for pitch allocation across multiple windows

– Better resolution of the spectra in the case of uncertain assignments

Interference with the original signal

7.3 Melody Recognition

(66)

• Post-processed pitch detection for a flute

7.3 Melody Recognition

(67)

Problems of melody detection

– Strong polyphony

– Tuning of the instruments changes with temperature, humidity, or time (can be handled in the ML method by adjusting the ideal spectra)

– Even minor changes (just for a second) can lead to alternating detection of two notes where only one is played

7.3 Melody Recognition

(68)

• Spectrogram examples

Spectrogram

(69)

• Spectrogram examples

– Venetian Snares - Songs About My Cats - 14 - Look.wav

Spectrogram

(70)

• Spectrogram examples

Spectrogram

(71)

• Speech and music

Classification

(72)

• Speech and music

Classification

(73)

• Speech and music

Classification

(74)

• Harmonics

Frequencies and Pitch

(75)

• Harmonics

– Examples: sine wave, tuning fork, flute, violin, piano, singer

Frequencies and Pitch

(76)

Experiments: pitch

Pitch Determination

(77)

Experiments: pitch

Pitch Determination

(78)

Experiments: pitch

Pitch Determination

(79)

• Audio retrieval systems on the WWW

– http://www.musipedia.org
– http://www.musicline.de
– http://www.midomi.com

Audio Retrieval

(80)

• Audio Retrieval

- Low-Level Audio Features
- Difference Limen

- Pitch: tracking algorithms

This Lecture

(81)

• Query by Humming

• Melody Representation and Matching

– Parsons Codes

– Dynamic Time Warping

• Hidden Markov Models

Next lecture
