Principles of Neural Data Processing, April 2013

(1)

Principles of Neural Data Processing in the Brain?

Heinz Horner

Institut für Theoretische Physik, Ruprecht-Karls-Universität Heidelberg

horner@tphys.uni-heidelberg.de

(2)

Different levels of understanding

Consciousness, emotions, free will, behaviour
...
Neocortex, midbrain, cerebellum
Cortical areas, nuclei, long-distance wiring
Cortical layers, columns, short-distance wiring
Neurones, (glia cells), communication among neurones
Dendrite, soma, axon, synapse
Membrane, ionic channels, ionic pumps, receptors, neurotransmitters
...

(3)

Anatomy of the human brain, cortical areas

Corpus callosum: connection between hemispheres
Cingulate gyrus: emotional, cognitive and motor tasks
Thalamus: relay station to the cerebral cortex
Hippocampus: long-term memory, maps for navigation
Fornix: formation of memory
Mammillary body: formation of memory
Amygdala: emotions, fear, reward
Hypothalamus: temperature, sleep, ...
Olfactory bulb: smell
Stria terminalis: output from amygdala
Brainstem
Prefrontal cortex: planning
Motor cortex: motion
Somatosensory cortex: sense of touch
Parietal lobe: integration of sensory information
Visual cortex: vision
Angular gyrus: verbal association
Cerebellum: coordination of motions
Wernicke's area
Auditory cortex: hearing
Temporal lobe: organisation of sensory input, memory
Broca's area: speech

[Regions grouped in the figure: limbic system, cerebral cortex]

(4)

Cortical layers

Grey matter: cell bodies (neurones, glia cells), neuropil
White matter: myelinated axons (long-distance connections)

[Scale of the micrograph: 5 mm]

(5)

Long-ranged connections

Corpus callosum: connection between hemispheres
Cingulate gyrus: emotional, cognitive and motor tasks
Thalamus: relay station to the cerebral cortex
Fornix: formation of memory
Mammillary body
Hypothalamus: temperature, sleep, ...
Olfactory bulb: smell
Stria terminalis: output from amygdala
Brainstem: respiration, blood circulation
Prefrontal cortex: planning
Motor cortex: motion
Somatosensory cortex: sense of touch
Parietal lobe: integration of sensory information
Visual cortex: vision
Angular gyrus: verbal association
Cerebellum: coordination of motions
Wernicke's area: spoken language
Auditory cortex: hearing
Temporal lobe: organisation of sensory input, memory, personality
Broca's area: speech

[Figure: long-ranged connections, visual pathway]

(6)

Cortical layers (I - VI)

Cell types:
Pyramidal neurones (c): excitatory, long-ranged connections
Spiny stellate cells (d): excitatory, input from thalamus
Basket cells (e): inhibitory, project to the soma of pyramidal neurones
Martinotti cells (f): inhibitory, project into layer I

Afferent fibres:
Associative fibres (a): from other parts of the cortex
Specific fibres (b): from thalamus

Efferent fibres:
Axons from pyramidal cells

(7)

Synapses

Excitatory and inhibitory synapses on a dendrite (grey matter)

[Scale of the micrograph: 1 µm]

(8)

Cortical wiring

Coarse wiring: specific, genetically predetermined
Wiring in detail: determined by chance and learning

(9)

Some numbers

Thickness of the cortex: 3 - 4 mm
Area of the cortex: 0.5 m²
Neurons: 10^10 - 10^11
Neurons per mm³: 10^5
Synapses per neuron: 10^4
Synapses: 10^14 - 10^15
Length of dendrites per neuron: 10 mm
Length of dendrites per mm³: 400 m
Length of axons per mm³: 3000 m
Time scale: 10 - 100 msec
Computing power: > 1 PetaFLOPS (?)
Storage capacity: > 100 Terabyte (?)
Energy consumption: 20 W

(10)

McCulloch-Pitts neuron, perceptron

Presynaptic neurons 1 ... N with activities (firing rates) f_1 ... f_N
Synaptic efficacies W_i

Potential: U = Σ_i W_i f_i
Threshold: ϑ
Firing rate: f = Θ(U − ϑ)

Dendrite: incoming signals, membrane potential
Soma: threshold
Axon: outgoing signals, spikes

Decision plane: Σ_i W_i f_i = ϑ
[Figure: for N = 2 the decision plane separates the (f_1, f_2) plane into regions of output 0 and +, with W as its normal vector]
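The threshold unit above maps directly onto a few lines of code; a minimal sketch (NumPy assumed; the function name `fire` is illustrative, not from the lecture):

```python
import numpy as np

def fire(f_pre, W, theta):
    """McCulloch-Pitts unit: f = Theta(U - theta) with U = sum_i W_i f_i."""
    U = W @ f_pre                 # membrane potential from presynaptic rates
    return float(U > theta)       # Heaviside step: fire (1) or stay silent (0)

# N = 2 example: the unit realises the decision plane W.f = theta
W = np.array([1.0, 1.0])
print(fire(np.array([1.0, 0.0]), W, theta=1.5))  # 0.0, below the plane
print(fire(np.array([1.0, 1.0]), W, theta=1.5))  # 1.0, above the plane
```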

(11)

Perceptron learning

Learning rule: change the couplings by Δ_μ W_i ∼ ± ξ_i^μ

[Figure: for N = 2 each learning step rotates the weight vector W towards the misclassified pattern ξ^μ]

Hebb's rule (D. O. Hebb 1949; S. Freud, S. Exner 1895):
If an axon of cell A is close enough to cell B, such that B is repeatedly excited by A and firing, a change takes place such that the ability of cell A to stimulate cell B is increased.

Experiment: Bi, Poo (1998)

Learning patterns ξ_i^μ with desired outputs η^μ = {1, 0}
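A minimal sketch of this error-driven rule: the couplings change by ± ε ξ^μ only when output and desired output disagree (names, rates and the toy data are illustrative assumptions):

```python
import numpy as np

def train_perceptron(xi, eta, theta=0.0, eps=0.1, epochs=200):
    """Change couplings by +/- eps * xi^mu whenever output and target disagree."""
    W = np.zeros(xi.shape[1])
    for _ in range(epochs):
        errors = 0
        for x, t in zip(xi, eta):
            f = float(W @ x > theta)      # actual output of the unit
            if f != t:                    # Hebb-like correction on mistakes only
                W += eps * (t - f) * x    # +eps*x if too low, -eps*x if too high
                errors += 1
        if errors == 0:                   # every pattern 'embedded': done
            return W
    return W

# Toy data: random patterns labelled by a hidden teacher (hence separable)
rng = np.random.default_rng(0)
xi = rng.integers(0, 2, size=(20, 50)).astype(float)
eta = (xi @ rng.normal(size=50) > 0).astype(float)
W = train_perceptron(xi, eta)
print(all(float(W @ x > 0) == t for x, t in zip(xi, eta)))  # True on convergence
```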

(12)

Estimate of the capacity of the perceptron

How many (random) patterns with mean activity a can be classified?
Probabilities P(η=1) = a and P(η=0) = 1 − a.

For random couplings W_i the probabilities for an actual output f = {1, 0} are P(f=1) = a and P(f=0) = 1 − a.

This means P(η=1, f=0) = P(η=0, f=1) = a(1 − a).

For A patterns a subset of A_x = 2a(1 − a)A patterns has to be 'embedded', i.e. the couplings have to be determined such that

Σ_i W_i ξ_i^μ = ϑ ± ε

This results in 2a(1 − a)A linear equations for the N couplings W_i. They can be solved for

A ≤ N / (2a(1 − a))

(13)

Estimate of the capacity of the perceptron

The total number of possible patterns with mean activity a is

N! / ((aN)! ((1 − a)N)!) ∼ e^{−[a ln(a) + (1 − a) ln(1 − a)] N}

Capacity c = A_max/N with A_max = N / (2a(1 − a)), i.e. c(a = 1/2) = 2 and c(a ≪ 1) ∼ 1/(2a).

Shannon information per synapse:

I/N = (1/2) [ 1/(1 − a) log₂(1/a) + 1/a log₂(1/(1 − a)) ]

[Plot: I/N as a function of a, growing as a decreases from 0.5 towards 0.01]

Perceptron:
linearly separable classification,
increased capacity and information content for sparse coding
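These estimates are easy to evaluate numerically; a small sketch of the two formulas above (helper names are illustrative):

```python
import numpy as np

def capacity(a):
    """c = A_max / N = 1 / (2 a (1 - a)), from the embedding estimate above."""
    return 1.0 / (2.0 * a * (1.0 - a))

def info_per_synapse(a):
    """I/N = c(a) * H(a): capacity times Shannon entropy per output, in bits."""
    H = -a * np.log2(a) - (1.0 - a) * np.log2(1.0 - a)
    return capacity(a) * H

for a in (0.5, 0.1, 0.01):
    print(f"a = {a}: c = {capacity(a):6.1f}, I/N = {info_per_synapse(a):5.2f} bits")
# a = 0.5 gives the classical c = 2; both c and I/N grow as coding gets sparser
```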

(14)

Perceptron learning

Iterate presenting the patterns ξ^μ; modify the couplings only if f^μ ≠ η^μ, according to Hebb's rule:

Δ_μ W = ± ε ξ^μ

Modification for sparse coding (a ≪ 1): learning only if η^μ = 1 and ξ_i^μ = 1

Required accuracy for each learning step: ΔW/W ∼ 1/(aN)

Learning:
learning regulated by novelty, reward, etc.
reduced learning precision for sparse coding

(15)

Divergent preprocessing

[Figure: input layer (N) → hidden layer (L) → output perceptrons]

Example: perceptron with random preprocessing.
The capacity is determined by the size L of the hidden layer. It can handle problems that are not linearly separable.

Divergent preprocessing:
increased capacity,
preprocessing random or determined by unsupervised learning,
sparse coding in the hidden layer.

Liquid computing
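A sketch of the preprocessing idea above, assuming a fixed random hidden layer of threshold units and a plain perceptron readout; XOR serves as the standard not-linearly-separable example (all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 2, 64                          # small input layer, larger hidden layer

# Fixed random divergent preprocessing: N inputs -> L hidden threshold units
W_hid = rng.normal(size=(L, N))
b_hid = rng.normal(size=L)
def hidden(x):
    return (W_hid @ x > b_hid).astype(float)

# XOR: not linearly separable in the two-dimensional input space
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# A plain perceptron on the L-dimensional hidden code can separate it
H = np.array([hidden(x) for x in X])
w = np.zeros(L)
for _ in range(500):
    for h, t in zip(H, y):
        f = float(w @ h > 0)
        w += 0.1 * (t - f) * h        # perceptron rule on the hidden code
print([float(w @ hidden(x) > 0) for x in X])   # expected: [0.0, 1.0, 1.0, 0.0]
```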

Cortex: almost all connections are intracortical, for example

total number of connections: 10^14
connections between hemispheres: 10^8
cortex → cerebellum: 10^7
optical nerve: 10^6
ear → cortex: 10^5

(16)


Associative memory, attractor network

Associative recall:

Any part of the items memorized can trigger the recall of the complete item.

Unlike in computers there is no address under which items are stored.

The storage is distributed. A single item is shared by many neurons.

A given neuron can be part of many items stored.

Attractor network, Hopfield model:

Fully or partially connected network of McCulloch-Pitts neurons.
Construct an "energy" such that the patterns to be stored form local minima, surrounded by basins of attraction (gradient dynamics).


The Hopfield model

Patterns to be memorized: random ξ_i^μ = ±1, μ = 1..A
Hebb learning: W_ij = (1/N) Σ_μ ξ_i^μ ξ_j^μ
Firing rate: f_i = {1, 0} or f_i = {1, −1}, i.e. f(U) = Θ(U) or f(U) = sign(U)
Membrane potential: U_i = Σ_j W_ij f_j = Σ_μ ξ_i^μ m_μ with overlap m_μ = (1/N) Σ_j ξ_j^μ f_j
Temporal evolution (sequential updating): f_i(t + τ) = sign(U_i(t))
"Energy", cost function: E(t) = −(1/2) Σ_ij f_i(t) W_ij f_j(t) = −(N/2) Σ_μ m_μ(t)²
Change of energy: Δ_i E(t) = −2 sign(U_i(t)) U_i(t) < 0 for f_i(t) U_i(t) < 0, and Δ_i E(t) = 0 for f_i(t) U_i(t) > 0
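A toy implementation of exactly these ingredients (a sketch; the sizes, seed and sweep count are arbitrary choices, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(2)
N, A = 200, 10                                 # neurons and stored patterns

xi = rng.choice([-1.0, 1.0], size=(A, N))      # random patterns xi_i^mu = +/-1
W = (xi.T @ xi) / N                            # Hebb: W_ij = (1/N) sum_mu xi_i xi_j
np.fill_diagonal(W, 0.0)                       # no self-coupling

def recall(f, sweeps=5):
    """Sequential updating f_i <- sign(U_i); the energy can only decrease."""
    f = f.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            U = W[i] @ f                       # membrane potential U_i
            f[i] = 1.0 if U >= 0 else -1.0     # sign(U_i), ties broken to +1
    return f

# Cue the network with pattern 0 corrupted by flipping 20% of the bits
cue = xi[0].copy()
cue[rng.choice(N, size=N // 5, replace=False)] *= -1.0
m = recall(cue) @ xi[0] / N                    # overlap with the stored pattern
print(f"overlap m = {m:.2f}")                  # close to 1.0: successful recall
```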


(17)

Attractor network, Hopfield model, sparse patterns

Patterns to be memorized: ξ_i^μ = {1, 0} with ⟨ξ_i^μ⟩ = a ≪ 1, μ = 1..A
Hebb's learning: W_ij = (1/(aN)) Σ_μ ξ_i^μ ξ_j^μ
'Overlap' between the actual firing state f_i and pattern ξ_i^μ: m_μ = (1/(aN)) Σ_i ξ_i^μ f_i

Signal-to-noise analysis: assume the firing state is near pattern ν, f_i = ξ_i^ν.

Potential: U_i = Σ_j W_ij f_j = ξ_i^ν m_ν + Σ_{μ≠ν} ξ_i^μ m_μ

Average over the random patterns ξ_i^μ: ⟨m_μ⟩ = a m_ν and ⟨m_μ²⟩ − ⟨m_μ⟩² = (1/N) m_ν²

σ²(m_ν) = ⟨U_i²⟩ − ⟨U_i⟩² = (aA/N) m_ν²  with  ⟨U_i⟩ = ξ_i^ν m_ν + Ū

The distribution P(U_i) is a Gaussian of width σ(m_ν) centered around ⟨U_i⟩:

P(U) = (1/√(2π σ²)) e^{−(U − ξ m)² / (2σ²)}

(18)

Attractor network, Hopfield model, sparse patterns (cont.)

P(U) = (1/√(2π σ(m)²)) e^{−(U − ξm)² / (2σ(m)²)}

[Figure: the two Gaussians a P₁(U) and (1 − a) P₀(U), separated by the threshold ϑ]

Retrieval dynamics: ∂_t f = ∫_ϑ^∞ dU P(U) − f, with the constraint a f₁ + (1 − a) f₀ = a

Non-trivial solution for A < N / (2a ln(1/a));
information per synapse I/N² → 1/(2 ln 2) ≈ 0.72 for a → 0

Attractor network:
patterns are stored as fixed points of the retrieval dynamics,
reduced learning precision for sparse coding

(19)

Dynamic attractors, transients

Mechanisms:
Non-symmetric couplings: W_ij ≠ W_ji
Retarded couplings: U_i(t) = Σ_j ∫ dτ W_ij(τ) f_j(t − τ)
Fatigue after ongoing firing: threshold ϑ_i(t) follows the firing rate f_i(t)
Forward excitation and inhibition
Special delay lines (cerebellum)

Resulting dynamics:
Fixed points (associative memory)
Limit cycles (rhythm generator)
Transients (motion generator, short-time memory, liquid computing)

Dynamic attractors, transients: various tasks, various mechanisms

(20)

Winner-takes-all mechanism

Task: select the neuron(s) with the highest input.

Excitatory neurons i with inputs h_i; one inhibitory neuron o.
Synapses: W_io = W, W_oi = χ

F(t + τ) = Σ_i f_i(t + τ) = Σ_i Θ(h_i − χ f_o(t) − ϑ)
f_o(t + τ) = Θ̃(W F(t) − ϑ_o)

[Figure: graphical solution as the intersection of F(f_o) and f_o(F)]

Winner-takes-all:
unsupervised learning,
retinotopic and other maps,
vector quantization;
can lead to oscillations (limit cycle)
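A sketch of this feedback loop under simplifying assumptions: threshold ϑ set to 0 and a graded inhibitory unit that relaxes slowly towards the summed activity (with a hard, fast inhibitory unit the same loop can oscillate, as noted above; gains are illustrative):

```python
import numpy as np

def winner_takes_all(h, chi=1.0, g=0.8, dt=0.2, steps=200):
    """Excitatory units f_i = Theta(h_i - chi * f_o); a single inhibitory
    unit relaxes towards g * F with F = sum_i f_i (threshold set to 0)."""
    f_o = 0.0
    for _ in range(steps):
        f = (h - chi * f_o > 0).astype(float)  # who survives the inhibition
        F = f.sum()                            # total excitatory activity
        f_o += dt * (g * F - f_o)              # slow graded inhibitory response
    return f

h = np.array([0.3, 0.9, 0.5, 0.7])
print(winner_takes_all(h))                     # [0. 1. 0. 0.]: largest input wins
```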

(21)

Topology preserving (retinotopic) maps

Visual system: mapping retina → NGL → V1
Somatosensory cortex, motor cortex
Auditory system: tonotopic maps

(22)

Topology preserving (retinotopic) maps

Topology preserving mapping of an input space x onto a target space y (Kohonen map), couplings W(x, y).

Initial state: no structure in the couplings W(x, y).

Apply correlated signals at random: ξ(x) = φ(x − x_o) with randomly chosen centres x_o;
the topology in input space is established by the patterns.

Target space: lateral short-ranged excitatory couplings establish the topology in target space;
winner-takes-all mechanism.

Hebb's learning for W(x, y).

(23)

Topology preserving (retinotopic) maps


Topology preserving maps:
unsupervised learning with correlated input patterns
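A minimal sketch of a one-dimensional Kohonen map unfolding along an input interval; the neighbourhood function and decay schedules are illustrative choices, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(3)
L = 20                                    # neurons on a one-dimensional chain
W = rng.uniform(0.0, 1.0, size=L)         # initial state: no structure

for t in range(5000):
    eps = 0.1 * np.exp(-t / 2500)         # decaying learning rate
    sigma = 3.0 * np.exp(-t / 1000) + 0.5 # shrinking lateral neighbourhood
    x = rng.uniform(0.0, 1.0)             # input signal centred at a random x_o
    winner = np.argmin(np.abs(W - x))     # winner-takes-all in target space
    d = np.arange(L) - winner             # distance along the chain
    hood = np.exp(-d**2 / (2 * sigma**2)) # lateral excitatory neighbourhood
    W += eps * hood * (x - W)             # Hebb-like update drags neighbours along

print(np.round(W, 2))                     # roughly monotone: topology preserved
```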

(24)


Single spike or firing rate coding?

Experiment on awake monkeys: three types of dynamic dot patterns are repeatedly presented, and the spike trains of a neuron in the visual cortex are recorded. Each pattern is presented 50 times; the spike trains are plotted individually as a function of time past the beginning of the presentation.

The histogram exhibits clear pattern-dependent structures, whereas individual spike trains exhibit strong fluctuations.

This and similar experiments suggest coding by firing rates of subpopulations.

[Bair and Koch: spike trains and PSTHs (spk/s) versus time (0 - 2000 msec) for coherences c = 1.0, 0.5, 0.0; scale bar 100 ms]

Figure 5: Temporal modulation disappears for highly coherent stimuli. The spike trains and PSTHs demonstrate that the stimulus-locked temporal modulation present for incoherent motion (c = 0) and for partially coherent motion (c = 0.5) was virtually absent during the sustained period of the response to coherent motion (c = 1). This suggests that temporal dynamics of a higher order than those found in rigid translation are necessary to induce a specific and unique time course in the spike discharge pattern.

Coding by firing rates of subpopulations:
structures on a time scale of ~ 20 ms,
mean firing rate < 100 Hz.
Exceptions? Auditory system?


(25)


The role of inhibitory neurons, balanced networks

Experiment: fluctuations of the membrane potential U_i(t) (visual cortex of a cat)

[Figure 1.17: intracellular recordings from cat V1 neurons (scale 20 mV / 100 ms). Left: response of a neuron in an in vitro slice preparation to constant current injection. Center and right: recordings from neurons in vivo responding to injected current (center) or a moving visual image (right). Adapted from Holt et al., 1996.]


In vitro (slice): regular spiking pattern for constant current injection; in slices the dendritic trees are almost completely cut.

In vivo: irregular spiking pattern for constant current injection or an external stimulus; the dendritic trees are intact and the neuron is exposed to fluctuating stimulation from other neurons.


(26)

The role of inhibitory neurons, balanced networks

Model calculations: integrate-and-fire neuron exposed to external noise

d/dt U(t) = (1/τ_o) { U_ext + Ū + η(t) − U(t) }

η(t): 'fluctuating force' due to the coupling to populations of excitatory and inhibitory neurons,
⟨η⟩ = 0, ⟨η²⟩ = T_noise

With the firing rates f_e, f_i of the background excitatory and inhibitory neurons:
Ū ∼ W_ee f_e − W_ei f_i and T_noise ∼ W_ee f_e + W_ei f_i

Response characteristic ⟨f(U_ext)⟩:
[Plot: mean firing rate versus U_ext, a sharp threshold for weak noise, a smooth sigmoid for strong noise]
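A sketch of this noisy integrate-and-fire model with a simple Euler-Maruyama integration; the threshold-and-reset rule, the noise discretisation and all parameter values are illustrative assumptions (Ū is folded into U_ext here):

```python
import numpy as np

def firing_rate(U_ext, T_noise, tau=10.0, theta=1.0, dt=0.1, T=5000.0, seed=0):
    """Simulate tau dU/dt = U_ext + eta(t) - U with threshold theta and
    reset to 0; eta is zero-mean noise of strength T_noise."""
    rng = np.random.default_rng(seed)
    U, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        eta = rng.normal(0.0, np.sqrt(T_noise * tau / dt))  # fluctuating force
        U += dt / tau * (U_ext + eta - U)
        if U > theta:               # threshold crossing: emit spike and reset
            spikes += 1
            U = 0.0
    return 1000.0 * spikes / T      # mean rate in Hz (time measured in ms)

# Response characteristic <f(U_ext)>: step-like for weak noise, smooth for strong
for T_noise in (0.01, 0.5):
    rates = [firing_rate(u, T_noise) for u in (0.8, 1.0, 1.2)]
    print(T_noise, [round(r, 1) for r in rates])
```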

(27)

The role of inhibitory neurons, balanced networks:
spiking activity strongly influenced by noise,
increased sensitivity, tunable response characteristic.

Up and down states?
Oscillations?
Binding: binding by synchronization of spindle oscillations

(28)

Conclusion?

New techniques reveal a broad spectrum of exciting and detailed answers.

What was the question?
