
Hindawi Publishing Corporation
EURASIP Journal on Advances in Signal Processing
Volume 2009, Article ID 682930, 14 pages
doi:10.1155/2009/682930
Research Article
Reconstruction of Sensory Stimuli Encoded with
Integrate-and-Fire Neurons with Random Thresholds
Aurel A. Lazar and Eftychios A. Pnevmatikakis
Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
Correspondence should be addressed to Eftychios A. Pnevmatikakis,
Received 1 January 2009; Accepted 4 April 2009
Recommended by Jose Principe
We present a general approach to the reconstruction of sensory stimuli encoded with leaky integrate-and-fire neurons with random
thresholds. The stimuli are modeled as elements of a Reproducing Kernel Hilbert Space. The reconstruction is based on finding
a stimulus that minimizes a regularized quadratic optimality criterion. We discuss in detail the reconstruction of sensory stimuli
modeled as absolutely continuous functions as well as stimuli with absolutely continuous first-order derivatives. Reconstruction
results are presented for stimuli encoded with a single neuron as well as with a population of neurons. Examples are given that demonstrate the
performance of the reconstruction algorithms as a function of threshold variability.
Copyright © 2009 A. A. Lazar and E. A. Pnevmatikakis. This is an open access article distributed under the Creative Commons
Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.
1. Introduction
Formal spiking neuron models, such as integrate-and-fire
(IAF) neurons, encode information in the time domain
[1]. Assuming that the input signal is bandlimited and the
bandwidth is known, a perfect recovery of the stimulus based
upon the spike times can be achieved provided that the spike
density is above the Nyquist rate [2]. These results hold
for a wide variety of sensory stimuli, including audio [3]
and video [4], encoded with a population of IAF neurons.
More generally, Time Encoding Machines (TEMs) encode analog amplitude information in the time domain using only
asynchronous circuits [2]. Time encoding has been shown
to be closely related to traditional amplitude sampling. This
observation has enabled the application of a large number of
recovery results obtained for signals encoded using irregular
sampling to time encoding.
A common underlying assumption of TEM models
is that the input stimulus is bandlimited with known
bandwidth. Implicit in this assumption is that the signal is
defined on the entire real line. In sensory systems, however,
the bandwidth of the signal entering the soma of the
neuron is often unknown. Ordinarily good estimates of the
bandwidth are not available due to nonlinear processing in
the upstream transduction pathways, for example, contrast
extraction in vision. In addition, stimuli have limited time
support and the neurons respond with a finite number of
spikes.
Furthermore, neuronal spike trains exhibit variability
in response to identical input stimuli. In simple formal
spiking neuron models, such as IAF neurons, this variability
is associated with random thresholds [5]. IAF neurons with
random thresholds have been used to model the observed
spike variability of certain neurons of the fly visual system
[6]. Linear recovery methods were proposed in [7] for an
ideal IAF neuron with exponentially distributed thresholds
that exhibits Poisson statistics.
A perfect recovery of a stimulus encoded with a formal
neuron model with random threshold along the lines of [3]
is not possible, and an alternative reconstruction formalism
is needed. Consequently, a major goal is the development of a mathematical framework for the representation and
recovery of arbitrary stimuli with a population of neurons
with random thresholds on finite time intervals. There are
two key elements to such an extension. First, the signal
model is defined on a finite time interval and, therefore, the
bandlimited assumption does not hold. Second, the number
of degrees of freedom in signal reconstruction is reduced
by either introducing a natural signal recovery constraint
[8] or by assuming that the stimuli are restricted to be
“smooth.”
In this paper, we propose a Reproducing Kernel Hilbert
Space (RKHS) [9] framework for the representation and
recovery of finite length stimuli with a population of leaky
integrate-and-fire (LIF) neurons with random thresholds.
More specifically, we set up the recovery problem as a
regularized optimization problem, and use the theory of
smoothing splines in RKHS [10] to derive an optimal
(nonlinear) solution.
RKHSs play a major role in statistics [10] and in
machine learning [11]. In theoretical neuroscience they have
been little used with the exception of [12]. In the latter
work, RKHSs have been applied in a probabilistic setting
of point process models to study the distance between
spike trains of neural populations. Spline models have
been used in computational neuroscience in the context of
estimating the (random) intensity rate from raster neuron
recordings [13, 14]. In this paper we will bring the full
power of RKHSs and the theory of smoothing splines to
bear on the problem of reconstruction of stimuli encoded with a population of IAF neurons with random thresholds.
Although the methodology employed here applies to
arbitrary RKHSs, for example, the space of bandlimited stimuli,
we will focus in this paper on Sobolev spaces. Signals in
Sobolev spaces are rather natural for modeling purposes
as they entail absolutely continuous functions and their
derivatives. A more precise definition will be given in
the next section. The inner-product in Sobolev spaces is
based on higher-order function derivatives. In the RKHS
of bandlimited functions, the inner-product formulation of
the t-transform is straightforward because of the simple
structure of the inner-product in these spaces [3, 4]. However, this is not the case for Sobolev spaces, since the inner-product
has a more complex structure. We will be interpreting the t-
transform as a linear functional on the Sobolev space, and
then through the use of the Riesz representation theorem,
rewrite it in an inner-product form that is amenable to
further analytical treatment. We can then apply the key
elements of the theory developed in [10].
This paper is organized as follows. In Section 2 the
problem of representation of a stimulus defined in a class of
Sobolev spaces and encoded by leaky integrate-and-fire (LIF)
neurons with random thresholds is formulated. In Section 3
the stimulus reconstruction problem is addressed when the
stimuli are encoded by a single LIF neuron with random
threshold. The reconstruction algorithm calls for finding
a signal that minimizes a regularized optimality criterion.
Reconstruction algorithms are worked out in detail for the
case of absolutely continuous stimuli as well as stimuli with absolutely continuous first-order derivatives. Two examples
are described. In the first, the recovery of a stimulus from
its temporal contrast is given. In the second, the recovery of
stimuli encoded with a pair of rectifier neurons is presented.
Section 4 generalizes the previous results to stimuli encoded
with a population of LIF neurons. The paper concludes with
Section 5.
2. Encoding of Stimuli with LIF Neurons with
Random Thresholds
In this section we formulate the problem of stimulus
encoding with leaky integrate-and-fire neurons with random
thresholds. The stimuli under consideration are defined on a
finite time interval and are assumed to be functions that have
a smoothness property. The natural mathematical setting for
the stimuli considered in this paper is provided by function
spaces of the RKHS family [15]. A brief introduction to
RKHSs is given in Appendix A.1.
We show that encoding with LIF neurons with random
thresholds is akin to taking a set of noisy measurements on
the stimulus. We then demonstrate that these measurements
can be represented as projections of the stimulus on a set of
sampling functions.
2.1. Modeling of Sensory Stimuli as Elements of RKHSs. There
is a rich collection of Reproducing Kernel Hilbert Spaces
that have been thoroughly investigated and the modeler can
take advantage of [9]. In what follows we restrict ourselves to a special class of RKHSs, the so-called Sobolev spaces [16]. Sobolev spaces are important because they combine the desirable properties of important function spaces (e.g., absolutely continuous functions, absolutely continuous derivatives, etc.), while they retain the reproducing property. Moreover, a
parametric description of the space (e.g., bandwidth) is not
required.
Stimuli are functions $u = u(t)$, $t \in \mathcal{T}$, defined as elements of a Sobolev space $\mathcal{S}^m = \mathcal{S}^m(\mathcal{T})$, $m \in \mathbb{N}$. The Sobolev space $\mathcal{S}^m(\mathcal{T})$, for a given $m$, $m \in \mathbb{N}$, is defined as
$$\mathcal{S}^m = \left\{ u \mid u, u', \dots, u^{(m-1)}\ \text{absolutely continuous},\ u^{(m)} \in L^2(\mathcal{T}) \right\}, \tag{1}$$
where $L^2(\mathcal{T})$ is the space of functions of finite energy over the domain $\mathcal{T}$. We will assume that the domain $\mathcal{T}$ is a finite interval on $\mathbb{R}$ and, w.l.o.g., we set it to $\mathcal{T} = [0,1]$. Note that the space $\mathcal{S}^m$ can be written as $\mathcal{S}^m := \mathcal{H}_0 \oplus \mathcal{H}_1$ ($\oplus$ denotes the direct sum) with
$$\mathcal{H}_0 := \operatorname{span}\left\{1, t, \dots, t^{m-1}\right\},$$
$$\mathcal{H}_1 := \left\{ u \mid u \in C^{m-1}(\mathcal{T}),\ u^{(m)} \in L^2(\mathcal{T}),\ u(0) = u'(0) = \cdots = u^{(m-1)}(0) = 0 \right\}, \tag{2}$$
where $C^{m-1}(\mathcal{T})$ denotes the space of $m-1$ times continuously differentiable functions defined on $\mathcal{T}$. It can be shown [9] that the space $\mathcal{S}^m$ endowed with the inner-product $\langle\cdot,\cdot\rangle : \mathcal{S}^m\times\mathcal{S}^m\to\mathbb{R}$ given by
$$\langle u, v\rangle := \sum_{i=0}^{m-1} u^{(i)}(0)\,v^{(i)}(0) + \int_0^1 u^{(m)}(s)\,v^{(m)}(s)\,ds \tag{3}$$
is an RKHS with reproducing kernel
$$K(s,t) = \sum_{i=1}^m \chi_i(s)\,\chi_i(t) + \int_0^1 G_m(s,\tau)\,G_m(t,\tau)\,d\tau, \tag{4}$$
with $\chi_i(t) = t^{i-1}/(i-1)!$ and $G_m(t,s) = (t-s)_+^{m-1}/(m-1)!$. Note that the reproducing kernel of (4) can be written as $K(s,t) = K_0(s,t) + K_1(s,t)$ with
$$K_0(s,t) = \sum_{i=1}^m \chi_i(s)\,\chi_i(t), \qquad K_1(s,t) = \int_0^1 G_m(s,\tau)\,G_m(t,\tau)\,d\tau. \tag{5}$$
The kernels $K_0$, $K_1$ are reproducing kernels for the spaces $\mathcal{H}_0$, $\mathcal{H}_1$ endowed with inner products given by the two terms on the right-hand side of (3), respectively. Note also that the functions $\chi_i(t)$, $i = 1,2,\dots,m$, form an orthogonal base in $\mathcal{H}_0$.
Remark 1. The norm and the reproducing kernel in an RKHS
uniquely determine each other. For examples of Sobolev
spaces endowed with a variety of norms, see [9].
2.2. Encoding of Stimuli with a LIF Neuron. Let $u = u(t)$, $t \in \mathcal{T}$, denote the stimulus. The stimulus biased by a constant background current $b$ is fed into a LIF neuron with resistance $R$ and capacitance $C$. Furthermore, the neuron has a random threshold with mean $\delta$ and variance $\sigma^2$. The value of the threshold changes only at spike times, that is, it is constant between two consecutive spikes. Assume that after each spike the neuron is reset to the initial value zero. Let $(t_k)$, $k = 1,2,\dots,n+1$, denote the output spike train of the neuron.

Between two consecutive spike times the operation of the LIF neuron is fully described by the t-transform [1]
$$\int_{t_k}^{t_{k+1}} \left(b + u(s)\right)\exp\left(-\frac{t_{k+1}-s}{RC}\right)ds = C\delta_k, \tag{6}$$
where $\delta_k$ is the value of the random threshold during the interspike interval $[t_k, t_{k+1})$. The t-transform can also be rewritten as
$$L_k u = q_k + \varepsilon_k, \tag{7}$$
where $L_k : \mathcal{S}^m \to \mathbb{R}$ is a linear functional given by
$$L_k u = \int_{t_k}^{t_{k+1}} u(s)\exp\left(-\frac{t_{k+1}-s}{RC}\right)ds,$$
$$q_k = C\delta - bRC\left(1 - \exp\left(-\frac{t_{k+1}-t_k}{RC}\right)\right),$$
$$\varepsilon_k = C\left(\delta_k - \delta\right), \tag{8}$$
and the $\varepsilon_k$'s are i.i.d. random variables with mean zero and variance $(C\sigma)^2$ for all $k = 1,2,\dots,n$. The sequence $(L_k)$, $k = 1,2,\dots,n$, has a simple interpretation: it represents the set of $n$ measurements performed on the stimulus $u$.
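The encoding step is easy to simulate. The following minimal Python sketch (our own illustration, not part of the paper; the function name and the forward-Euler discretization are our choices) integrates the membrane equation $C\,dv/dt = -v/R + b + u(t)$, whose solution between resets reproduces the left-hand side of the t-transform (6), and redraws the threshold from $\mathcal{N}(\delta, \sigma^2)$ after every spike.

```python
import numpy as np

def lif_encode(u, dt, b=2.5, R=40.0, C=0.01, delta=2.5, sigma=0.1, rng=None):
    """Encode a sampled stimulus u (uniform grid, step dt) with a LIF neuron
    whose threshold is redrawn from N(delta, sigma^2) after every spike;
    between spikes the membrane obeys C dv/dt = -v/R + b + u(t), which is
    the differential form of the t-transform (6)."""
    rng = np.random.default_rng() if rng is None else rng
    v, thresh, spikes = 0.0, rng.normal(delta, sigma), []
    for i in range(len(u)):
        # forward-Euler step of the membrane equation
        v += dt * (-v / (R * C) + (b + u[i]) / C)
        if v >= thresh:
            spikes.append(i * dt)                 # record the spike time
            v = 0.0                               # reset to the initial value
            thresh = rng.normal(delta, sigma)     # draw a new random threshold
    return np.array(spikes)
```

The recorded spike times $(t_k)$ are all that the decoder of Section 3 sees.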
Lemma 1. The t-transform of the LIF neuron can be written in inner-product form as
$$\langle\phi_k, u\rangle = q_k + \varepsilon_k, \tag{9}$$
where
$$\phi_k(t) = \int_{t_k}^{t_{k+1}} K(t,s)\exp\left(-\frac{t_{k+1}-s}{RC}\right)ds, \tag{10}$$
$q_k$, $\varepsilon_k$ are given by (8), $k = 1,2,\dots,n$, and $\langle\cdot,\cdot\rangle$ is the inner-product (3) for the space $\mathcal{S}^m$, $m \in \mathbb{N}$. In addition, the $\varepsilon_k$'s are i.i.d. random variables with mean zero and variance $(C\sigma)^2$ for all $k = 1,2,\dots,n$.
Proof. We will rewrite the linear functionals of (7) in inner-product form, that is, as projections in $\mathcal{S}^m$. The existence of an inner-product form representation is guaranteed by the Riesz lemma (see Appendix A.2). Thus, there exists a set of functions $\phi_k \in \mathcal{S}^m$ such that
$$L_k u = \langle\phi_k, u\rangle, \tag{11}$$
for all $k = 1,2,\dots,n$. Since $\mathcal{S}^m$ is an RKHS, we also have that
$$\phi_k(t) = \langle\phi_k, K_t\rangle = L_k K_t = \int_{t_k}^{t_{k+1}} K(t,s)\exp\left(-\frac{t_{k+1}-s}{RC}\right)ds, \tag{12}$$
where $K_t(\cdot) = K(\cdot, t)$, for all $t \in \mathcal{T}$.

The main steps of the proof of Lemma 1 are schematically depicted in Figure 1. The t-transform has an equivalent representation as a series of linear functionals acting on the stimulus $u$. These functionals are in turn represented as projections of the stimulus $u$ on a set of functions in the space $\mathcal{S}^m$.
2.3. Encoding of Stimuli with a Population of LIF Neurons. In this section we briefly discuss the encoding of stimuli with a population of LIF neurons with random thresholds. The presentation follows closely the one in Section 2.2. The main result obtained in Lemma 2 will be used in Section 4.

Consider a population of $N$ LIF neurons where neuron $j$ has a random threshold with mean $\delta^j$ and standard deviation $\sigma^j$, bias $b^j$, resistance $R^j$, and capacitance $C^j$. Whenever the membrane potential reaches its threshold value, the neuron $j$ fires a spike and resets its membrane potential to 0. Let $t_k^j$ denote the $k$th spike of neuron $j$, with $k = 1,2,\dots,n^j+1$. Here $n^j+1$ denotes the number of spikes that neuron $j$ triggers, $j = 1,2,\dots,N$.
The t-transform of each neuron $j$ is given by (see also (6))
$$\int_{t_k^j}^{t_{k+1}^j} \left(b^j + u(s)\right)\exp\left(-\frac{t_{k+1}^j-s}{R^jC^j}\right)ds = C^j\delta_k^j, \tag{13}$$
for all $k = 1,2,\dots,n^j$, and $j = 1,2,\dots,N$.
Lemma 2. The t-transform of the LIF population can be written in inner-product form as
$$\frac{1}{C^j\sigma^j}\left\langle\phi_k^j, u\right\rangle = \frac{1}{C^j\sigma^j}\,q_k^j + \varepsilon_k^j, \tag{14}$$
Figure 1: The operator interpretation of stimulus encoding with a LIF neuron. The spike train $(t_k)$ satisfies the t-transform equations $\int_{t_k}^{t_{k+1}}(b+u(s))\,e^{-(t_{k+1}-s)/RC}\,ds = C\delta$, which are expressed as linear functionals $L_k u = q_k$ and, in turn, as inner products $\langle\phi_k, u\rangle = q_k$.
with $\phi_k^j$, $q_k^j$ essentially given by (10), (8) (plus an added superscript $j$), and
$$\varepsilon_k^j = \frac{\delta_k^j - \delta^j}{\sigma^j} \tag{15}$$
are i.i.d. random variables with mean zero and variance one for all $k = 1,2,\dots,n^j$, and $j = 1,2,\dots,N$.

Proof. Largely the same as the proof of Lemma 1.
3. Reconstruction of Stimuli Encoded with
a LIF Neuron with Random Threshold
In this section we present in detail the algorithm for the
reconstruction of stimuli encoded with a LIF neuron with
random threshold. Two cases are considered in detail. First,
we provide the reconstruction of stimuli that are modeled
as absolutely continuous functions. Second, we derive the
reconstruction algorithm for stimuli that have absolutely
continuous first-order derivatives. The reconstructed stimu-
lus satisfies a regularized optimality criterion. Examples that
highlight the intuitive properties of the results obtained are
given at the end of this section.
3.1. Reconstruction of Stimuli in Sobolev Spaces. As shown in Section 2.2, a LIF neuron with random threshold provides the reader with the set of measurements
$$\langle\phi_k, u\rangle = q_k + \varepsilon_k, \tag{16}$$
where $\phi_k \in \mathcal{S}^m$ for all $k = 1,2,\dots,n$. Furthermore, $(\varepsilon_k)$, $k = 1,2,\dots,n$, are i.i.d. random variables with zero mean and variance $(C\sigma)^2$.
An optimal estimate $\hat{u}$ of $u$ minimizes the cost functional
$$\frac{1}{n}\sum_{k=1}^n \left(q_k - \langle\phi_k, u\rangle\right)^2 + \lambda\left\|P_1 u\right\|^2, \tag{17}$$
where $P_1 : \mathcal{S}^m \to \mathcal{H}_1$ is the projection of the Sobolev space $\mathcal{S}^m$ to $\mathcal{H}_1$. Intuitively, the nonnegative parameter $\lambda$ regulates the choice of the estimate $\hat{u}$ between faithfulness to data fitting ($\lambda$ small) and maximum smoothness of the recovered signal ($\lambda$ large). We further assume that the threshold of the neuron is modeled as a sequence of i.i.d. random variables $(\delta_k)$, $k = 1,2,\dots,n$, with Gaussian distribution with mean $\delta$ and variance $\sigma^2$. Consequently, the random variables $(\varepsilon_k)$, $k = 1,2,\dots,n$, are i.i.d. Gaussian with mean zero and variance $(C\sigma)^2$. Of main interest is the effect of random threshold fluctuations for $\sigma \ll \delta$. (Note that for $\sigma \ll \delta$ the probability that the threshold is negative is close to zero.) We have the following theorem.
Theorem 1. Assume that the stimulus $u = u(t)$, $t \in [0,1]$, is encoded into a time sequence $(t_k)$, $k = 1,2,\dots,n$, with a LIF neuron with random threshold that is fully described by (6). The optimal estimate $\hat{u}$ of $u$ is given by
$$\hat{u} = \sum_{i=1}^m d_i\chi_i + \sum_{k=1}^n c_k\psi_k, \tag{18}$$
where
$$\chi_i(t) = \frac{t^{i-1}}{(i-1)!}, \qquad \psi_k(t) = \int_{t_k}^{t_{k+1}} K_1(t,s)\exp\left(-\frac{t_{k+1}-s}{RC}\right)ds, \tag{19}$$
and the coefficients $[c]_k = c_k$ and $[d]_i = d_i$ satisfy the matrix equations
$$(G + n\lambda I)\,c + Fd = q, \qquad F^{\mathsf{T}}c = 0, \tag{20}$$
with $[G]_{kl} = \langle\psi_k, \psi_l\rangle$, $[F]_{ki} = \langle\phi_k, \chi_i\rangle$, and $[q]_k = q_k$, for all $k,l = 1,2,\dots,n$, and $i = 1,2,\dots,m$.
Proof. Since the inner-product $\langle\phi_k, u\rangle$ describes the measurements performed by the LIF neuron with random thresholds described by (6), the minimizer of (17) is exactly the optimal estimate of $u$ encoded into the time sequence $(t_k)$, $k = 1,2,\dots,n$. The rest of the proof follows from Theorem 3 of Appendix A.3.
The representation functions $\psi_k$ are given by
$$\psi_k(t) = \left(P_1\phi_k\right)(t) = \langle P_1\phi_k, K_t\rangle = \langle\phi_k, P_1K_t\rangle = L_kK_t^1 = \int_{t_k}^{t_{k+1}} K_1(t,s)\exp\left(-\frac{t_{k+1}-s}{RC}\right)ds. \tag{21}$$
Finally, the entries of the matrices $F$ and $G$ are given by
$$[F]_{ki} = \int_{t_k}^{t_{k+1}} \chi_i(s)\exp\left(-\frac{t_{k+1}-s}{RC}\right)ds, \qquad [G]_{kl} = \langle\psi_k,\psi_l\rangle = \int_{\mathcal{T}} \psi_k^{(m)}(s)\,\psi_l^{(m)}(s)\,ds, \tag{22}$$
for all $k,l = 1,2,\dots,n$, and $i = 1,2,\dots,m$. The system of (20) is identical to (A.8) of Theorem 3 of Appendix A.3.
Algorithm 1. The coefficients $c$ and $d$ satisfying the system of (20) are given by
$$c = M^{-1}\left(I - F\left(F^{\mathsf{T}}M^{-1}F\right)^{-1}F^{\mathsf{T}}M^{-1}\right)q, \qquad d = \left(F^{\mathsf{T}}M^{-1}F\right)^{-1}F^{\mathsf{T}}M^{-1}q, \tag{23}$$
with $M = G + n\lambda I$.

Proof. The exact form of the coefficients above is derived as part of the results of Algorithm 6 (see Appendix A.3). The latter algorithm also shows how to evaluate the coefficients $c$ and $d$ based on the QR decomposition of the matrix $F$.
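Translating (23) into code requires only two linear solves with $M$; the sketch below (an illustration with names of our own choosing; see Algorithm 6 of Appendix A.3 for the numerically preferred QR variant) assumes that $G$, $F$, and $q$ have already been assembled.

```python
import numpy as np

def spline_coeffs(G, F, q, lam):
    """Solve (G + n*lam*I) c + F d = q with F^T c = 0, per (23).
    G: (n, n) Gram matrix, F: (n, m), q: (n,) measurement vector."""
    n = G.shape[0]
    M = G + n * lam * np.eye(n)
    Minv_q = np.linalg.solve(M, q)          # M^{-1} q
    Minv_F = np.linalg.solve(M, F)          # M^{-1} F
    A = F.T @ Minv_F                        # F^T M^{-1} F, an (m, m) matrix
    d = np.linalg.solve(A, F.T @ Minv_q)
    c = Minv_q - Minv_F @ d                 # equals M^{-1}(q - F d); F^T c = 0
    return c, d
```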

3.2. Recovery in $\mathcal{S}^1$ and $\mathcal{S}^2$. In this section we provide detailed algorithms for the reconstruction of stimuli in $\mathcal{S}^1$ and $\mathcal{S}^2$, respectively, encoded with LIF neurons with random thresholds. In the explicit form given, the algorithms can be readily implemented.
3.2.1. Recovery of $\mathcal{S}^1$-Stimuli Encoded with a LIF Neuron with Random Threshold. The stimuli $u$ in this section are elements of the Sobolev space $\mathcal{S}^1$. Thus, stimuli are modeled as absolutely continuous functions on $[0,1]$ whose derivative can be defined in a weak sense. The Sobolev space $\mathcal{S}^1$ endowed with the inner-product
$$\langle u, v\rangle = u(0)\,v(0) + \int_0^1 u'(s)\,v'(s)\,ds \tag{24}$$
is an RKHS with reproducing kernel given by (see also (4))
$$K(t,s) = 1 + \int_0^1 1_{(s>\tau)}\cdot 1_{(t>\tau)}\,d\tau = 1 + \min(t,s). \tag{25}$$
The sampling functions $\phi_k(t)$, $k = 1,2,\dots,n$, given by (10), amount to
$$\begin{aligned} \frac{\phi_k(t)}{RC} ={}& 1 - \exp\left(-\frac{t_{k+1}-t_k}{RC}\right)\\ &+ \left[1 - \exp\left(-\frac{t_{k+1}-t_k}{RC}\right)\right]t\cdot 1_{(t\le t_k)}\\ &+ \left[t - RC\exp\left(-\frac{t_{k+1}-t}{RC}\right) + (RC - t_k)\exp\left(-\frac{t_{k+1}-t_k}{RC}\right)\right]\cdot 1_{(t_k<t\le t_{k+1})}\\ &+ \left[t_{k+1} - t_k\exp\left(-\frac{t_{k+1}-t_k}{RC}\right) - RC\left(1 - \exp\left(-\frac{t_{k+1}-t_k}{RC}\right)\right)\right]\cdot 1_{(t_{k+1}<t)}. \end{aligned} \tag{26}$$
The representation functions $\psi_k(t)$ are given, as before, by
$$\psi_k(t) = \langle\psi_k, K_t\rangle = \langle\phi_k, P_1K_t\rangle = L_kK_t - L_kK_t^0 = \phi_k(t) - RC\left(1 - \exp\left(-\frac{t_{k+1}-t_k}{RC}\right)\right), \tag{27}$$
for all $k = 1,2,\dots,n$. For the entries of $G$ and $F$, from (22) and (24) we have that
$$\frac{[G]_{kl}}{(RC)^2} = \begin{cases} \left(1 - e^{-\frac{t_{k+1}-t_k}{RC}}\right)\left[t_{l+1} - RC - (t_l - RC)\,e^{-\frac{t_{l+1}-t_l}{RC}}\right], & l < k,\\[1mm] t_{k+1} - \dfrac{3RC}{2} - 2(t_k - RC)\,e^{-\frac{t_{k+1}-t_k}{RC}} + \left(t_k - \dfrac{RC}{2}\right)e^{-\frac{2(t_{k+1}-t_k)}{RC}}, & l = k,\\[1mm] \left(1 - e^{-\frac{t_{l+1}-t_l}{RC}}\right)\left[t_{k+1} - RC - (t_k - RC)\,e^{-\frac{t_{k+1}-t_k}{RC}}\right], & l > k, \end{cases}$$
$$[F]_{k1} = RC\left(1 - \exp\left(-\frac{t_{k+1}-t_k}{RC}\right)\right), \tag{28}$$
for all $k = 1,2,\dots,n$, and all $l = 1,2,\dots,n$.

Algorithm 2. The minimizer $\hat{u} \in \mathcal{S}^1$ is given by (18), where
(i) the coefficients $d$ and $c$ are given by (23) with the elements of the matrices $G$ and $F$ specified by (28), and
(ii) the representation functions $(\psi_k)$, $k = 1,2,\dots,n$, are given by (27) and (26).
Remark 2. If the $\mathcal{S}^1$-stimuli are encoded with an ideal IAF neuron with random threshold, the quantities of interest for implementing the reconstruction Algorithm 2 are given by
$$\phi_k(t) = \begin{cases} t_{k+1} - t_k + (t_{k+1} - t_k)\,t, & t \le t_k,\\[1mm] t_{k+1} - t_k - \dfrac{t^2}{2} + t_{k+1}t - \dfrac{t_k^2}{2}, & t_k < t \le t_{k+1},\\[1mm] t_{k+1} - t_k + \dfrac{t_{k+1}^2 - t_k^2}{2}, & t_{k+1} < t, \end{cases}$$
$$\psi_k(t) = \phi_k(t) - (t_{k+1} - t_k),$$
$$[G]_{kl} = \begin{cases} \dfrac{1}{2}\left(t_{l+1}^2 - t_l^2\right)(t_{k+1} - t_k), & l < k,\\[1mm] \dfrac{1}{3}(t_{k+1} - t_k)^2(t_{k+1} + 2t_k), & l = k,\\[1mm] \dfrac{1}{2}\left(t_{k+1}^2 - t_k^2\right)(t_{l+1} - t_l), & l > k, \end{cases}$$
$$[F]_{k1} = t_{k+1} - t_k, \tag{29}$$
for all $k = 1,2,\dots,n$, and all $l = 1,2,\dots,n$. Note that the above quantities can also be obtained by taking the limits of (8), (26), (27), (28) when $R \to \infty$.
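To illustrate how Remark 2 is used in practice, the sketch below (ours; it reuses the hypothetical `spline_coeffs` helper from the sketch after Algorithm 1) assembles $G$, $F$, and the representation functions $\psi_k$ for the ideal IAF case and evaluates $\hat{u}$ on a grid. For an ideal IAF neuron the measurements are $q_k = C\delta - b(t_{k+1} - t_k)$, the $R\to\infty$ limit of (8).

```python
import numpy as np

def recover_S1_ideal(t, q, lam, grid):
    """Reconstruct u_hat on `grid` from spike times t[0..n] of an ideal IAF
    neuron, using the S^1 quantities of Remark 2 (m = 1)."""
    tk, tk1 = t[:-1], t[1:]                 # the n interspike intervals
    n = len(tk)
    F = (tk1 - tk).reshape(n, 1)            # [F]_{k1} = t_{k+1} - t_k
    G = np.empty((n, n))
    for k in range(n):
        for l in range(n):
            if l < k:
                G[k, l] = 0.5 * (tk1[l]**2 - tk[l]**2) * (tk1[k] - tk[k])
            elif l > k:
                G[k, l] = 0.5 * (tk1[k]**2 - tk[k]**2) * (tk1[l] - tk[l])
            else:
                G[k, l] = (tk1[k] - tk[k])**2 * (tk1[k] + 2 * tk[k]) / 3.0
    c, d = spline_coeffs(G, F, q, lam)      # solver sketched after Algorithm 1
    u_hat = np.full(len(grid), d[0])        # d_1 * chi_1 with chi_1(t) = 1
    for k in range(n):
        a, b = tk[k], tk1[k]
        psi = np.where(grid <= a, (b - a) * grid,
              np.where(grid <= b, -grid**2 / 2 + b * grid - a**2 / 2,
                       (b**2 - a**2) / 2))  # psi_k of (29)
        u_hat += c[k] * psi
    return u_hat
```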
3.2.2. Recovery of $\mathcal{S}^2$-Stimuli Encoded with a LIF Neuron with Random Threshold. In this section stimuli $u$ belong to the Sobolev space $\mathcal{S}^2$, that is, the space of signals with absolutely continuous first-order derivatives. Endowed with the inner-product
$$\langle u, v\rangle = u(0)\,v(0) + u'(0)\,v'(0) + \int_0^1 u''(s)\,v''(s)\,ds, \tag{30}$$
$\mathcal{S}^2$ is an RKHS with reproducing kernel
$$K(s,t) = 1 + ts + \int_0^{\min(s,t)}(s-\tau)(t-\tau)\,d\tau = 1 + ts + \frac{1}{2}\min(s,t)^2\max(s,t) - \frac{1}{6}\min(s,t)^3. \tag{31}$$
The sampling functions $\phi_k$, $k = 1,2,\dots,n$, are given by (10) and are equal to
$$\begin{aligned} e^{t_{k+1}/RC}\,\phi_k(t) = g_k(t) &+ \left[t^2\,\frac{f_1(t_{k+1}) - f_1(t_k)}{2} - t^3\,\frac{f_0(t_{k+1}) - f_0(t_k)}{6}\right]\cdot 1_{(t\le t_k)}\\ &+ \left[t^2\,\frac{f_1(t_{k+1}) - f_1(t)}{2} - t^3\,\frac{f_0(t_{k+1}) - f_0(t)}{6} + t\,\frac{f_2(t) - f_2(t_k)}{2} - \frac{f_3(t) - f_3(t_k)}{6}\right]\cdot 1_{(t_k<t\le t_{k+1})}\\ &+ \left[t\,\frac{f_2(t_{k+1}) - f_2(t_k)}{2} - \frac{f_3(t_{k+1}) - f_3(t_k)}{6}\right]\cdot 1_{(t_{k+1}<t)}, \end{aligned} \tag{32}$$
where the functions $f_0, f_1, f_2, f_3 : \mathcal{T}\to\mathbb{R}$ are of the form
$$\begin{aligned} f_0(x) &= RC\exp\left(\frac{x}{RC}\right),\\ f_1(x) &= RC\,(x - RC)\exp\left(\frac{x}{RC}\right),\\ f_2(x) &= RC\left[(RC)^2 + (x - RC)^2\right]\exp\left(\frac{x}{RC}\right),\\ f_3(x) &= RC\left[(x - RC)^3 + (RC)^2(3x - 5RC)\right]\exp\left(\frac{x}{RC}\right),\\ g_k(t) &= f_0(t_{k+1}) - f_0(t_k) + t\left[f_1(t_{k+1}) - f_1(t_k)\right]. \end{aligned} \tag{33}$$
Note that for each $i$, $i = 0,1,2,3$,
$$f_i(x) = \int x^i \exp\left(\frac{x}{RC}\right) dx, \tag{34}$$
that is, $f_i$ is an antiderivative of $x^i e^{x/RC}$.

The representation functions are equal to
$$\psi_k(t) = \phi_k(t) - e^{-t_{k+1}/RC}\,g_k(t), \tag{35}$$
and the entries of $F$ are given by
$$[F]_{k1} = e^{-t_{k+1}/RC}\left[f_0(t_{k+1}) - f_0(t_k)\right], \qquad [F]_{k2} = e^{-t_{k+1}/RC}\left[f_1(t_{k+1}) - f_1(t_k)\right]. \tag{36}$$
Finally, the entries of $G$ can also be computed in closed form. To evaluate them note that $\psi_k(0) = \psi_k'(0) = 0$, for all $k = 1,2,\dots,n$. Therefore
$$[G]_{kl} = \langle\psi_k,\psi_l\rangle = \int_0^1 \psi_k''(s)\,\psi_l''(s)\,ds,$$
with
$$\frac{\psi_k''(t)}{RC} = \begin{cases} t_{k+1} - RC - (t_k - RC)\,e^{-\frac{t_{k+1}-t_k}{RC}} - t\left(1 - e^{-\frac{t_{k+1}-t_k}{RC}}\right), & t \le t_k,\\[1mm] t_{k+1} - t - RC\left(1 - e^{-\frac{t_{k+1}-t}{RC}}\right), & t_k < t \le t_{k+1},\\[1mm] 0, & t_{k+1} < t. \end{cases} \tag{37}$$
Denoting by
$$y_k = 1 - \exp\left(-\frac{t_{k+1}-t_k}{RC}\right), \qquad z_k = t_{k+1} - RC - (t_k - RC)\exp\left(-\frac{t_{k+1}-t_k}{RC}\right), \tag{38}$$
so that $\psi_k''(t)/RC = z_k - t\,y_k$ on $[0, t_k]$, the entries of the $G$ matrix amount to
$$\frac{[G]_{kl}}{(RC)^2} = \begin{cases} \left[t_kz_kz_l - \dfrac{t_k^2}{2}\left(y_kz_l + y_lz_k\right) + \dfrac{t_k^3}{3}y_ky_l\right] + z_l\left[(t_{k+1}-RC)(t_{k+1}-t_k) - \dfrac{t_{k+1}^2 - t_k^2}{2} + (RC)^2y_k\right]\\ \quad - y_l\left[\dfrac{1}{2}(t_{k+1}-RC)\left(t_{k+1}^2 - t_k^2\right) - \dfrac{1}{3}\left(t_{k+1}^3 - t_k^3\right) + (RC)^2z_k\right], & k < l,\\[2mm] t_kz_k^2 - t_k^2y_kz_k + \dfrac{t_k^3}{3}y_k^2 + \dfrac{1}{3}(t_{k+1}-t_k)^3 - RC(t_{k+1}-t_k)^2 - (RC)^2(t_{k+1}-t_k)\left(1 - 2y_k\right)\\ \quad + \dfrac{1}{2}(RC)^3\left(1 - e^{-\frac{2(t_{k+1}-t_k)}{RC}}\right), & k = l,\\[2mm] \left[t_lz_lz_k - \dfrac{t_l^2}{2}\left(y_lz_k + y_kz_l\right) + \dfrac{t_l^3}{3}y_ly_k\right] + z_k\left[(t_{l+1}-RC)(t_{l+1}-t_l) - \dfrac{t_{l+1}^2 - t_l^2}{2} + (RC)^2y_l\right]\\ \quad - y_k\left[\dfrac{1}{2}(t_{l+1}-RC)\left(t_{l+1}^2 - t_l^2\right) - \dfrac{1}{3}\left(t_{l+1}^3 - t_l^3\right) + (RC)^2z_l\right], & k > l. \end{cases} \tag{39}$$
Algorithm 3. The minimizer $\hat{u} \in \mathcal{S}^2$ is given by (18), where
(i) the coefficients $d$ and $c$ are given by (23) with the elements of the matrices $G$ and $F$ specified by (39) and (36), respectively, and
(ii) the representation functions $(\psi_k)$, $k = 1,2,\dots,n$, are given by (35) and (32).
Remark 3. If $\mathcal{S}^2$-stimuli are encoded with an ideal IAF neuron with random threshold, the quantities of interest in implementing the reconstruction Algorithm 3 are given by
$$\phi_k(t) = \psi_k(t) + t_{k+1} - t_k + \frac{t\left(t_{k+1}^2 - t_k^2\right)}{2},$$
$$\psi_k(t) = \begin{cases} \dfrac{t^2}{4}\left(t_{k+1}^2 - t_k^2\right) - \dfrac{t^3}{6}(t_{k+1} - t_k), & t \le t_k,\\[1mm] \dfrac{t_k^4}{24} - \dfrac{t}{6}t_k^3 + \dfrac{t^2}{4}t_{k+1}^2 - \dfrac{t^3}{6}t_{k+1} + \dfrac{t^4}{24}, & t_k < t \le t_{k+1},\\[1mm] -\dfrac{1}{24}\left(t_{k+1}^4 - t_k^4\right) + \dfrac{t}{6}\left(t_{k+1}^3 - t_k^3\right), & t_{k+1} < t, \end{cases}$$
$$[G]_{kl} = \begin{cases} \dfrac{\left(t_{l+1}^3 - t_l^3\right)\left(t_{k+1}^2 - t_k^2\right)}{12} - \dfrac{\left(t_{l+1}^4 - t_l^4\right)(t_{k+1} - t_k)}{24}, & l < k,\\[1mm] \dfrac{1}{4}(t_{k+1} - t_k)^2\left[\dfrac{1}{3}t_k^3 + t_kt_{k+1}^2 + \dfrac{1}{5}(t_{k+1} - t_k)^3\right], & l = k,\\[1mm] \dfrac{\left(t_{k+1}^3 - t_k^3\right)\left(t_{l+1}^2 - t_l^2\right)}{12} - \dfrac{\left(t_{k+1}^4 - t_k^4\right)(t_{l+1} - t_l)}{24}, & l > k, \end{cases}$$
$$[F]_{ki} = \frac{t_{k+1}^i - t_k^i}{i}, \tag{40}$$
for all $k = 1,2,\dots,n$, all $l = 1,2,\dots,n$, and all $i = 1,2$. Note that the above quantities can also be obtained by taking the limits of (8), (32), (35), (36), (39) when $R \to \infty$.
3.3. Examples. In this section we present two examples that
demonstrate the performance of the stimulus reconstruction
algorithms presented above. In the first example, a simplified
model of the temporal contrast derived from the photocur-
rent drives the spiking behavior of a LIF neuron with random
threshold. While the effective bandwidth of the temporal
contrast is typically unknown, the analog waveform is
absolutely continuous and the first-order derivative can be
safely assumed to be absolutely continuous as well.
In the second example, the stimulus is encoded by a
pair of nonlinear rectifier circuits each cascaded with a LIF
neuron. The rectifier circuits separate the positive and the
negative components of the stimulus. Both signal compo-
nents are assumed to be absolutely continuous. However, the
first-order derivatives of the component signals are no longer absolutely continuous.
In both cases the encoding circuits are of specific
interest to computational neuroscience and neuromorphic
engineering. We argue that Sobolev spaces are a natural
choice for characterizing the stimuli that are of interest in
these applications and show that the algorithms perform well
and can essentially recover the stimulus in the presence of
noise.
3.3.1. Encoding of Temporal Contrast with a LIF Neuron. A key signal in the visual system is the (positive) input photocurrent. Nonlinear circuits of nonspiking neurons in the retina extract the temporal contrast of the visual field from the photocurrent. The temporal contrast is then presented to the first level of spiking neurons, that is, the retinal ganglion cells (RGCs) [17]. If $I = I(t)$ is the input photocurrent, then a simplified model for the temporal contrast $u = u(t)$ is given by the equation
$$u(t) = \frac{d\log(I(t))}{dt} = \frac{1}{I(t)}\frac{dI}{dt}. \tag{41}$$
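On sampled data the temporal contrast can be approximated directly; a one-line sketch (ours, assuming a uniform sampling grid and a strictly positive photocurrent):

```python
import numpy as np

def temporal_contrast(I, dt):
    """u = d log(I)/dt estimated from samples of the photocurrent I; see (41)."""
    return np.gradient(np.log(I), dt)
```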
This model has been employed in the context of address event representation (AER) circuits for silicon retinas and related hardware applications [18]. It is abundantly clear that even when the input bandwidth of the photocurrent $I$ is known, the effective bandwidth of the actual input $u$ to the neuron cannot be analytically evaluated. However, the somatic input is still a continuously differentiable function, and it is natural to assume that it belongs to the Sobolev spaces $\mathcal{S}^1$ and $\mathcal{S}^2$. LIF neuron models have been used to fit responses of RGC neurons in the early visual system [19].

In our example the input photocurrent is assumed to be a positive bandlimited function with bandwidth $\Omega = 2\pi\cdot 30$ rad/s. The neuron is modeled as a LIF neuron with random threshold. After each spike, the value of the neuron threshold was picked from a Gaussian distribution $\mathcal{N}(\delta, \sigma^2)$. The LIF neuron parameters were $b = 2.5$, $\delta = 2.5$, $\sigma = 0.1$, $C = 0.01$, and $R = 40$ (all nominal values). The neuron fired a total of 108 spikes.
Figure 2(a) shows the optimal recovery in $\mathcal{S}^2$ with regularization parameter $\lambda = 1.3\times 10^{-14}$. Figure 2(b) shows the Signal-to-Noise Ratio for various values of the smoothing parameter $\lambda$ in $\mathcal{S}^1$ (blue line) and $\mathcal{S}^2$ (green line). The red line shows the SNR when the perfect recovery algorithm of [1] with the sinc kernel $K(s,t) = \sin(2\Omega(t-s))/\pi(t-s)$, $(s,t)\in\mathbb{R}^2$, is used (other choices of sinc kernel bandwidth give similar or lower SNR). The cyan line represents the threshold SNR defined as $10\log_{10}(\delta/\sigma)$. Recovery in $\mathcal{S}^2$ outperforms recovery in $\mathcal{S}^1$ but gives satisfactory results for a smaller range of the smoothing parameter. For a range of the regularization parameter $\lambda$ both reconstructions outperform the recovery algorithm for bandlimited stimuli based upon the sinc kernel [1]. Finally, the stimulus recovery SNR is close to the threshold SNR.

Figure 2: Recovery of temporal contrast encoded with a LIF neuron. The stimulus and its first-order derivative are absolutely continuous. (a) Original and $\mathcal{S}^2$-recovered stimulus versus time. (b) SNR (dB) versus the smoothing parameter $\lambda$ for recovery in $\mathcal{S}^1$ and $\mathcal{S}^2$.
3.3.2. Encoding the Stimulus Velocity with a Pair of LIF Neurons. The stimulus is encoded by a pair of nonlinear rectifier circuits, each cascaded with a LIF neuron. The rectifier circuits separate the positive and the negative components of the stimulus (see Figure 3). Such a clipping-based encoding mechanism has been used for modeling the direction selectivity of the H1 cell in the fly lobula plate [7].
Formally, the stimulus is decomposed into its positive $u^+$ and negative $u^-$ components by the nonlinear clipping mechanism:
$$u^+(t) = \max\left(u(t), 0\right), \qquad u^-(t) = -\min\left(u(t), 0\right), \qquad u(t) = u^+(t) - u^-(t). \tag{42}$$
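The decomposition (42) is immediate on sampled data; a small sketch (our illustration) produces the two components, each of which is then encoded and recovered separately (e.g., with the $\mathcal{S}^1$ machinery of Section 3.2.1) before recombination:

```python
import numpy as np

def clip_decompose(u):
    """Split a sampled stimulus into its rectified components per (42):
    u_plus = max(u, 0), u_minus = -min(u, 0), so that u = u_plus - u_minus."""
    return np.maximum(u, 0.0), -np.minimum(u, 0.0)
```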
As an example, the input stimulus $u$ is a bandlimited function with bandwidth $\Omega = 2\pi\cdot 30$ rad/s. After clipping, each signal component is no longer a bandlimited or a differentiable function. However, it is still an absolutely continuous function and, therefore, an element of the Sobolev space $\mathcal{S}^1$. The two components are encoded with two identical LIF neurons with parameters $b = 1.6$, $\delta = 1$, $R = 40$, and $C = 0.01$ (all nominal values). The thresholds of the two neurons are deterministic, that is, there is no noise in the encoding circuit. Each neuron produced 180 spikes.

By applying the recovery algorithm for $\mathcal{S}^1$-signals, the two signal components are separately recovered. Finally, by subtracting the recovered signal components, the original stimulus is reconstructed. Figure 4 shows the recovered version of the positive and negative signal components and of the original stimulus. As can be seen, both components are very accurately recovered. Note that since the threshold is deterministic, the regularization (or smoothing) parameter $\lambda$ is set to 0. The corresponding SNRs for the positive component, negative component, and original stimulus were 27.3 dB, 27.7 dB, and 34 dB, respectively.
4. Reconstruction of Stimuli Encoded with a Population of LIF Neurons with Random Thresholds

In this section we encode stimuli with a population of leaky integrate-and-fire neurons with random thresholds. As in Section 3, the stimuli are assumed to be elements of a Sobolev space. We first derive the general reconstruction algorithm. We then work out the reconstruction of stimuli that are absolutely continuous and stimuli that have an absolutely continuous first-order derivative. Examples of the reconstruction algorithm are given at the end of this section.
4.1. Reconstruction of Stimuli in Sobolev Spaces. Let $u = u(t)$, $t \in \mathcal{T}$, be a stimulus in the Sobolev space $\mathcal{S}^m$, $m \in \mathbb{N}$. An optimal estimate $\hat{u}$ of $u$ is obtained by minimizing the cost functional
$$\frac{1}{n}\sum_{j=1}^N\sum_{k=1}^{n^j}\left(\frac{q_k^j - \langle\phi_k^j, u\rangle}{C^j\sigma^j}\right)^2 + \lambda\left\|P_1u\right\|^2, \tag{43}$$
where $n = \sum_{j=1}^N n^j$ and $P_1 : \mathcal{S}^m \to \mathcal{H}_1$ is the projection of the Sobolev space $\mathcal{S}^m$ to $\mathcal{H}_1$. In what follows $q$ denotes the column vector $q = \left[(1/(C^1\sigma^1))\,q^1; \dots; (1/(C^N\sigma^N))\,q^N\right]$ with $[q^j]_k = q_k^j$, for all $j = 1,2,\dots,N$, and all $k = 1,2,\dots,n^j$. We have the following result.
Figure 3: Circuit for encoding of stimulus velocity. The stimulus $u(t)$ is decomposed into $u^+(t)$ and $u^-(t)$, each fed into a LIF neuron ($R_1, C_1, \delta_1$ and $R_2, C_2, \delta_2$, with spike-triggered reset); the spike trains $(t_k^1)$ and $(t_k^2)$ drive separate recovery algorithms whose outputs are recombined into the recovered stimulus.
Figure 4: Encoding the stimulus velocity with a pair of rectifier LIF neurons. (a) Positive signal component. (b) Negative signal component. (c) Reconstructed stimulus.
Theorem 2. Assume that the stimulus $u = u(t)$, $t \in [0,1]$, is encoded into a time sequence $(t_k^j)$, $j = 1,2,\dots,N$, $k = 1,2,\dots,n^j$, with a population of LIF neurons with random thresholds that is fully described by (13). The optimal estimate $\hat{u}$ of $u$ is given by
$$\hat{u} = \sum_{i=1}^m d_i\chi_i + \sum_{j=1}^N \frac{1}{C^j\sigma^j}\sum_{k=1}^{n^j} c_k^j\,\psi_k^j, \tag{44}$$
where
$$\chi_i(t) = \frac{t^{i-1}}{(i-1)!}, \qquad \psi_k^j(t) = \int_{t_k^j}^{t_{k+1}^j} K_1(t,s)\exp\left(-\frac{t_{k+1}^j-s}{R^jC^j}\right)ds. \tag{45}$$
The coefficient vectors $c = [c^1; \dots; c^N]$ with $[c^j]_k = c_k^j$, for all $j = 1,2,\dots,N$ and all $k = 1,2,\dots,n^j$, and $[d]_i = d_i$, for all $i = 1,2,\dots,m$, satisfy the matrix equations
$$\left(G + \lambda\sum_{j=1}^N n^j\cdot I\right)c + Fd = q, \qquad F^{\mathsf{T}}c = 0, \tag{46}$$
where $G$ is a block square matrix defined as
$$G = \begin{bmatrix} \dfrac{1}{(C^1\sigma^1)^2}G^{11} & \cdots & \dfrac{1}{C^1\sigma^1C^N\sigma^N}G^{1N}\\ \vdots & \ddots & \vdots\\ \dfrac{1}{C^N\sigma^NC^1\sigma^1}G^{N1} & \cdots & \dfrac{1}{(C^N\sigma^N)^2}G^{NN} \end{bmatrix}, \tag{47}$$
with $[G^{ij}]_{kl} = \langle\psi_k^i, \psi_l^j\rangle$, for all $i,j = 1,\dots,N$, all $k = 1,\dots,n^i$, and all $l = 1,\dots,n^j$. Finally, $F$ is a block matrix defined as $F = \left[(1/(C^1\sigma^1))\,F^1; \dots; (1/(C^N\sigma^N))\,F^N\right]$ with $[F^j]_{ki} = \langle\phi_k^j, \chi_i\rangle$, for all $j = 1,2,\dots,N$, all $k = 1,2,\dots,n^j$, and all $i = 1,2,\dots,m$.
Proof. The noise terms
$$q_k^j - \langle\phi_k^j, u\rangle \tag{48}$$
that appear in the cost functional (43) are independent Gaussian random variables with zero mean and variance $(C^j\sigma^j)^2$. Therefore, by normalizing the t-transform of each neuron with the noise standard deviation $C^j\sigma^j$, these random variables become i.i.d. with unit variance. After normalization, the linear functionals in (8) can be written as
$$L_k^j u = \int_{t_k^j}^{t_{k+1}^j} \frac{1}{C^j\sigma^j}\,u(s)\exp\left(-\frac{t_{k+1}^j-s}{R^jC^j}\right)ds. \tag{49}$$
This normalization causes a normalization in the sampling and reconstruction functions $\phi_k^j$ and $\psi_k^j$ as well as in the entries of $F$. We have
$$\left[F^j\right]_{ki} = \frac{1}{C^j\sigma^j}\int_{t_k^j}^{t_{k+1}^j} \chi_i(s)\exp\left(-\frac{t_{k+1}^j-s}{R^jC^j}\right)ds, \tag{50}$$
for all $i = 1,2,\dots,m$, all $k = 1,2,\dots,n^j$, and all $j = 1,2,\dots,N$. The rest of the proof follows from Theorem 3.
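In code, Theorem 2 amounts to stacking the per-neuron quantities with the weights $1/(C^j\sigma^j)$. The sketch below (our own helper with hypothetical names) assembles the normalized block matrices of (47) and the stacked $F$ and $q$; the coefficients then follow from (23), with $n\lambda I$ read as $\lambda\sum_j n^j\cdot I$.

```python
import numpy as np

def assemble_population(G_blocks, F_blocks, q_blocks, C, sigma):
    """Build the normalized G, F, q of Theorem 2 from per-neuron pieces:
    G_blocks[i][j] = matrix of <psi^i_k, psi^j_l>, F_blocks[j] = per-neuron F,
    q_blocks[j] = per-neuron measurement vector."""
    N = len(C)
    w = 1.0 / (np.asarray(C) * np.asarray(sigma))   # weights 1/(C^j sigma^j)
    G = np.block([[w[i] * w[j] * G_blocks[i][j] for j in range(N)]
                  for i in range(N)])
    F = np.vstack([w[j] * F_blocks[j] for j in range(N)])
    q = np.concatenate([w[j] * q_blocks[j] for j in range(N)])
    return G, F, q
```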
4.2. Recovery in $\mathcal{S}^1$ and $\mathcal{S}^2$. In this section we provide detailed algorithms for the reconstruction of stimuli in $\mathcal{S}^1$ and $\mathcal{S}^2$, respectively, encoded with a population of LIF neurons with random thresholds. As in Section 3.2, the algorithms provided can be readily implemented.

4.2.1. Recovery of $\mathcal{S}^1$-Stimuli Encoded with a Population of LIF Neurons with Random Thresholds. Let $u$ be an absolutely continuous signal in $\mathcal{T}$, that is, $u \in \mathcal{S}^1$. We have the following.

Algorithm 4. The minimizer $\hat{u} \in \mathcal{S}^1$ is given by (44) and
(i) the coefficients $d$ and $c$ are given by (23) with the elements of the matrices $G$ and $F$ specified in Theorem 2 and,
(ii) the representation functions $(\psi_k^j)$, $k = 1,2,\dots,n^j$, and $j = 1,2,\dots,N$, are essentially given by (27) and (26) (plus an added superscript $j$).
Remark 4. If $\mathcal{S}^1$-stimuli are encoded with a population of ideal IAF neurons with random thresholds, then the entries of the matrix $G$ can be computed analytically. Writing $\tau_k = t_k^i$, $\tau_{k+1} = t_{k+1}^i$, $\tau_l = t_l^j$, $\tau_{l+1} = t_{l+1}^j$, we have
$$\begin{aligned} \left[G^{ij}\right]_{kl} ={}& \tfrac{1}{2}\left(\tau_{l+1}^2 - \tau_l^2\right)(\tau_{k+1} - \tau_k)\cdot 1\left(\tau_{l+1}\le\tau_k\right)\\ &+ \left[\tfrac{1}{2}\left(\tau_k^2 - \tau_l^2\right)(\tau_{k+1} - \tau_k) + \tfrac{1}{2}\left(\tau_{l+1}^2 - \tau_k^2\right)(\tau_{k+1} - \tau_{l+1}) + \tfrac{1}{3}\left(\tau_{l+1}^3 - \tau_k^3\right) - \tau_k^2(\tau_{l+1} - \tau_k)\right]\cdot 1\left(\tau_l\le\tau_k\le\tau_{l+1}\le\tau_{k+1}\right)\\ &+ \left[\tfrac{1}{2}\tau_{l+1}\left(\tau_{k+1}^2 - \tau_k^2\right) - \tfrac{1}{6}\left(\tau_{k+1}^3 - \tau_k^3\right) - \tfrac{1}{2}\tau_l^2(\tau_{k+1} - \tau_k)\right]\cdot 1\left(\tau_l\le\tau_k\le\tau_{k+1}\le\tau_{l+1}\right)\\ &+ \left[\tfrac{1}{2}\tau_{k+1}\left(\tau_{l+1}^2 - \tau_l^2\right) - \tfrac{1}{6}\left(\tau_{l+1}^3 - \tau_l^3\right) - \tfrac{1}{2}\tau_k^2(\tau_{l+1} - \tau_l)\right]\cdot 1\left(\tau_k\le\tau_l\le\tau_{l+1}\le\tau_{k+1}\right)\\ &+ \left[\tfrac{1}{2}\left(\tau_l^2 - \tau_k^2\right)(\tau_{l+1} - \tau_l) + \tfrac{1}{2}\left(\tau_{k+1}^2 - \tau_l^2\right)(\tau_{l+1} - \tau_{k+1}) + \tfrac{1}{3}\left(\tau_{k+1}^3 - \tau_l^3\right) - \tau_l^2(\tau_{k+1} - \tau_l)\right]\cdot 1\left(\tau_k\le\tau_l\le\tau_{k+1}\le\tau_{l+1}\right)\\ &+ \tfrac{1}{2}\left(\tau_{k+1}^2 - \tau_k^2\right)(\tau_{l+1} - \tau_l)\cdot 1\left(\tau_{k+1}\le\tau_l\right), \end{aligned} \tag{51}$$
for all $i,j = 1,\dots,N$, all $k = 1,\dots,n^i$, and all $l = 1,\dots,n^j$.
Table 1: Nominal values of the neuron parameters ($\delta$ represents the mean value of the threshold).

Neuron:  1     2     3     4
b:       0.92  0.79  1.15  1.19
δ:       2.94  2.61  2.76  2.91
R:       31.9  25.2  32.1  34.2
C:       0.01  0.01  0.01  0.01
The analytical evaluation of the entries of the matrix $F$ is straightforward.
4.2.2. Recovery of $\mathcal{S}^2$-Stimuli Encoded with a Population of LIF Neurons with Random Thresholds. Let $u$ be a signal with absolutely continuous first-order derivative in $\mathcal{T}$, that is, $u \in \mathcal{S}^2$. We have the following.

Algorithm 5. The minimizer $\hat{u} \in \mathcal{S}^2$ is given by (44) and
(i) the coefficients $d$ and $c$ are given by (23) with the elements of the matrices $G$ and $F$ specified in Theorem 2 and,
(ii) the representation functions $(\psi_k^j)$, $k = 1,2,\dots,n^j$, and $j = 1,2,\dots,N$, are essentially given by (35) and (32) (plus an added superscript $j$).
4.3. Examples. In this section we present two examples that demonstrate the performance of the reconstruction algorithms for stimuli encoded with a population of neurons as presented above. In both cases the encoding circuits are of specific interest to neuromorphic engineering and computational neuroscience. The first example, presented in Section 4.3.1, shows the results of recovery of the temporal contrast encoded with a population of LIF neurons with random thresholds. Note that in this example the stimulus is in $\mathcal{S}^2$ and therefore also in $\mathcal{S}^1$. Stimulus reconstruction as a function of threshold variability and the smoothing parameter is demonstrated. In the example in Section 4.3.2, the stimulus is encoded using, as in Section 3.3.2, a rectifier circuit and a population of neurons. Here the recovery can be obtained in $\mathcal{S}^1$ only. As expected, recovery improves as the size of the population grows larger.
4.3.1. Encoding of Temporal Contrast with a Population of LIF Neurons. We examine the encoding of the temporal contrast with a population of LIF neurons. In particular, the temporal contrast input $u$ was fed into a population of 4 LIF neurons with nominal parameters given in Table 1.

In each simulation, each neuron had a random threshold with standard deviation $\sigma^j$ for all $j = 1,2,3,4$. Simulations were run for multiple values of $\delta^j/\sigma^j$ in the range $[5, 100]$, and the recovered versions were computed in both the $\mathcal{S}^1$ and $\mathcal{S}^2$ spaces for multiple values of the smoothing parameter $\lambda$. Figure 5 shows the SNR of the recovered stimuli in $\mathcal{S}^1$ and $\mathcal{S}^2$.
Figure 5: Signal-to-Noise Ratio for different noise threshold levels and different values of the smoothing parameter $\lambda$. The $x$-axis represents the threshold-to-noise ratio $\delta/\sigma$. (a) SNR for recovery in $\mathcal{S}^1$. (b) SNR for recovery in $\mathcal{S}^2$.
Figure 6: (a) Maximum SNR over all possible values of the smoothing parameter $\lambda$ for a fixed noise level $\delta/\sigma$. (b) Optimal value of the parameter $\lambda$ for which the recovered stimuli attain the maximum SNR. Blue line for $\mathcal{S}^1$ and green line for $\mathcal{S}^2$.
Figure 6 examines how the maximum SNR and the optimal value of the smoothing parameter that attains this maximum depend on the noise level. From Figures 5 and 6 we note that:
(i) recovery in $\mathcal{S}^2$ gives in general better results than recovery in $\mathcal{S}^1$; this is expected since $u \in \mathcal{S}^2 \subset \mathcal{S}^1$;
(ii) the optimal value of the smoothing parameter is largely independent of the noise level; this is due to the averaging in the cost functional across the population of neurons;
(iii) the encoding mechanism is very sensitive to the variability of the random threshold; in general, if the threshold-to-noise ratio $\delta/\sigma$ is below 15, then accurate recovery is not possible (SNR < 5 dB).
4.3.2. Velocity Encoding with a Population of Rectifier LIF Neurons. This example is a continuation of the example presented in Section 3.3.2. The positive and negative components of the stimulus are each fed into a population of 8 LIF neurons with random thresholds. The nominal values of the neuron parameters and the number of spikes that each neuron fired are given in Table 2. Using the same stimulus, the simulation was repeated one hundred times. In Figure 7 an example of the recovered positive and negative clipped signal components is shown, each encoded with 1, 2, 4, and 8 neurons. The clipped signal components are elements of the Sobolev space $\mathcal{S}^1$ but not $\mathcal{S}^2$. The difference between the recovered components approximates the original stimulus (third column). The three columns correspond to the recovery of the positive and of the negative
Table 2: Nominal values of the neuron parameters and the number of spikes fired. For each neuron we also had $C_+^i = C_-^i = 0.01$, $\sigma_+^i = \delta_+^i/20$, and $\sigma_-^i = \delta_-^i/20$ for all $i = 1,2,\dots,8$.

Neuron:   1     2     3     4     5     6     7     8
b+:       0.14  0.25  0.15  0.28  0.15  0.25  0.14  0.16
b−:       0.12  0.22  0.24  0.21  0.19  0.23  0.23  0.24
δ+:       2.03  2.35  1.61  2.11  1.64  1.52  2.01  1.85
δ−:       1.86  2.1   2.18  1.75  2.06  1.81  2.24  2.23
R+:       35    42    42    41    47    35    26    32
R−:       49    43    40    43    41    43    41    44
Spikes+:  19    22    25    26    25    35    19    22
Spikes−:  19    23    22    26    21    27    21    22
Figure 7: Recovery of absolutely continuous stimuli encoded with a population of LIF neurons with random thresholds. (a) Positive component. (b) Negative component. (c) Total stimulus. Each row shows recovery with a different number of encoding neurons (1, 2, 4, and 8).
components, and the total stimulus, respectively. The four rows show the recovery when 1, 2, 4, and 8 encoding neurons are used, respectively. Blue lines correspond to the original stimuli and green to the recovered ones. It can be seen that the recovery improves when more neurons are used. This can also be seen from Figure 8, where the corresponding mean SNR values are plotted. The error bars in the same figure correspond to the standard deviation of the associated SNR.
5. Conclusions
In this paper we presented a general approach to the
reconstruction of sensory stimuli encoded with LIF neurons
with random thresholds. We worked out in detail the reconstruction of stimuli modeled as elements of Sobolev spaces, for both absolutely continuous stimuli and stimuli with absolutely continuous first-order derivatives. Clearly the approach
advocated here is rather general, and the same formalism
can be applied to other Sobolev spaces or other RKHSs.
Finally, we note that the recovery methodology employed
here also applies to stimuli encoded with a population of LIF
neurons.
We extensively discussed the stimulus reconstruction
results for Sobolev spaces and gave detailed examples in the
hope that practicing systems neuroscientists will find them
easy to apply or will readily adapt them to other models of
sensory stimuli and thus to other RKHSs of interest. The

work presented here can also be applied to statistical learning
in neuroscience. This and other closely related topics will be
presented elsewhere.
Figure 8: SNR for the positive (blue), negative (green), and total stimulus (red) as a function of the number of encoding neurons.
Appendix

A. Theory of RKHS

A.1. Elements of Reproducing Kernel Hilbert Spaces.

Definition 1. A Hilbert space $\mathcal{H}$ of functions defined on a domain $\mathcal{T}$ associated with the inner-product $\langle\cdot,\cdot\rangle : \mathcal{H}\times\mathcal{H}\to\mathbb{R}$ is called a Reproducing Kernel Hilbert Space (RKHS) if for each $t \in \mathcal{T}$ the evaluation functional $E_t : \mathcal{H}\to\mathbb{R}$ with $E_tu = u(t)$, $u \in \mathcal{H}$, $t \in \mathcal{T}$, is a bounded linear functional. From the Riesz representation theorem (see Section A.2), for every $t \in \mathcal{T}$ and every $u \in \mathcal{H}$ there exists a function $K_t \in \mathcal{H}$ such that
$$\langle K_t, u\rangle = u(t). \tag{A.1}$$
The above equality is known as the reproducing property [15].
Definition 2. A function $K : \mathcal{T}\times\mathcal{T}\to\mathbb{R}$ is a reproducing kernel of the RKHS $\mathcal{H}$ if and only if
(1) $K(\cdot, t) \in \mathcal{H}$, for all $t \in \mathcal{T}$,
(2) $\langle u, K(\cdot, t)\rangle = u(t)$, for all $t \in \mathcal{T}$ and $u \in \mathcal{H}$.
From the above definition it is clear that $K(s,t) = \langle K(\cdot,s), K(\cdot,t)\rangle$. Moreover, it is easy to show that every RKHS has a unique reproducing kernel [15].
A.2. Riesz Representation Theorem. Here we state the Riesz Lemma, also known as the Riesz Representation Theorem.

Lemma 3. Let $\mathcal{H}$ be a Hilbert space and let $L : \mathcal{H}\to\mathbb{R}$ be a continuous (bounded) linear functional. Then there exists a unique element $v \in \mathcal{H}$ such that
$$Lu = \langle v, u\rangle, \tag{A.2}$$
for all $u \in \mathcal{H}$.

Proof. The proof can be found in [20]. Note that if $\mathcal{H}$ is an RKHS with reproducing kernel $K$, then the unique element can be easily found since
$$v(t) = \langle v, K_t\rangle = LK_t. \tag{A.3}$$
A.3. Smoothing Splines in Sobolev Spaces. Suppose that a receiver reads the following measurements
$$q_k = \langle\phi_k, u\rangle + \varepsilon_k, \tag{A.4}$$
where $\phi_k \in \mathcal{S}^m$ and the $\varepsilon_k$ are i.i.d. Gaussian random variables with zero mean and variance 1, for all $k = 1,2,\dots,n$. An optimal estimate $\hat{u}$ of $u$ minimizes the cost functional
$$\frac{1}{n}\sum_{k=1}^n \left(q_k - \langle\phi_k, u\rangle\right)^2 + \lambda\left\|P_1u\right\|^2, \tag{A.5}$$
where $P_1 : \mathcal{S}^m\to\mathcal{H}_1$ is the projection of the Sobolev space $\mathcal{S}^m$ to $\mathcal{H}_1$. Intuitively, the nonnegative parameter $\lambda$ regulates the choice of the estimate $\hat{u}$ between faithfulness to data fitting ($\lambda$ small) and maximum smoothness of the recovered signal ($\lambda$ large). We have the following theorem.
Theorem 3. The minimizer $\hat{u}$ of (A.5) is given by
$$\hat{u} = \sum_{i=1}^m d_i\chi_i + \sum_{k=1}^n c_k\psi_k, \tag{A.6}$$
where
$$\chi_i(t) = \frac{t^{i-1}}{(i-1)!}, \qquad \psi_k = P_1\phi_k. \tag{A.7}$$
Furthermore, the optimal coefficients $[c]_k = c_k$ and $[d]_i = d_i$ satisfy the matrix equations
$$(G + n\lambda I)\,c + Fd = q, \qquad F^{\mathsf{T}}c = 0, \tag{A.8}$$
where $[G]_{kl} = \langle\psi_k,\psi_l\rangle$, $[F]_{ki} = \langle\phi_k,\chi_i\rangle$, and $[q]_k = q_k$, for all $k,l = 1,2,\dots,n$, and $i = 1,2,\dots,m$.
Proof. We provide a sketch of the proof for completeness. A detailed proof appears in [10]. The minimizer can be expressed as
$$\hat{u} = \sum_{i=1}^m d_i\chi_i + \sum_{k=1}^n c_k\psi_k + \rho, \tag{A.9}$$
where $\rho \in \mathcal{S}^m$ is orthogonal to $\chi_1,\dots,\chi_m,\psi_1,\dots,\psi_n$. Then the cost functional defined in (A.5) becomes
$$\frac{1}{n}\left\|q - (Gc + Fd)\right\|^2 + \lambda\left(c^{\mathsf{T}}Gc + \left\|\rho\right\|^2\right), \tag{A.10}$$
and thus $\rho = 0$. By differentiating with respect to $c$, $d$ we get the system of (A.8).
Algorithm 6. The optimal coefficients $c$ and $d$ are given by
$$c = M^{-1}\left(I - F\left(F^{\mathsf{T}}M^{-1}F\right)^{-1}F^{\mathsf{T}}M^{-1}\right)q, \qquad d = \left(F^{\mathsf{T}}M^{-1}F\right)^{-1}F^{\mathsf{T}}M^{-1}q, \tag{A.11}$$
with $M = G + n\lambda I$. Alternatively,
$$c = Q_2\left(Q_2^{\mathsf{T}}MQ_2\right)^{-1}Q_2^{\mathsf{T}}q, \qquad d = R^{-1}Q_1^{\mathsf{T}}\left(q - Mc\right), \tag{A.12}$$
where $F = (Q_1 : Q_2)\begin{bmatrix}R\\0\end{bmatrix}$ is the QR decomposition of $F$, $Q_1$ is $n\times m$, $Q_2$ is $n\times(n-m)$, $Q = (Q_1 : Q_2)$ is orthogonal, and $R$ is an $m\times m$ upper triangular matrix.
Proof. Equations (A.11) come from the minimization of (A.10) with respect to $c$ and $d$. For (A.12), note that since $F^{\mathsf{T}}c = 0$ it must be that $Q_1^{\mathsf{T}}c = 0$. Since $Q$ is orthogonal, $c = Q_2\gamma$ for some $(n-m)$-dimensional vector $\gamma$. Equations (A.12) follow easily by substituting in the first equation in (A.11) and multiplying with $Q_2^{\mathsf{T}}$.
Remark 5. The two formulas for the coefficients, (A.11) and (A.12), give exactly the same results. According to [10], the formulas given by (A.12) are more suitable for numerical work than those of (A.11). Note, however, that when $m = 1$, the matrix $F$ becomes a vector and (A.11) can be simplified since the term $F^{\mathsf{T}}M^{-1}F$ becomes a scalar.
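A sketch of the QR route (ours; the function name is hypothetical) that mirrors (A.12) and avoids forming the explicit inverses of (A.11):

```python
import numpy as np

def spline_coeffs_qr(G, F, q, lam):
    """Compute c and d via (A.12) using the full QR decomposition of F."""
    n, m = F.shape
    M = G + n * lam * np.eye(n)
    Q, R = np.linalg.qr(F, mode='complete')   # F = (Q1 : Q2) [R; 0]
    Q1, Q2 = Q[:, :m], Q[:, m:]
    gamma = np.linalg.solve(Q2.T @ M @ Q2, Q2.T @ q)
    c = Q2 @ gamma                            # c = Q2 (Q2^T M Q2)^{-1} Q2^T q
    d = np.linalg.solve(R[:m, :m], Q1.T @ (q - M @ c))
    return c, d
```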
Acknowledgments
This work was supported by NIH Grant R01 DC008701-
01 and NSF Grant CCF-06-35252. E. A. Pnevmatikakis was
also supported by the Onassis Public Benefit Foundation.
The authors would like to thank the reviewers for their
suggestions for improving the presentation of this paper.
References
[1] A. A. Lazar, “Multichannel time encoding with integrate-and-
fire neurons,” Neurocomputing, vol. 65-66, pp. 401–407, 2005.
[2] A. A. Lazar and L. T. Tóth, “Perfect recovery and sensitivity
analysis of time encoded bandlimited signals,” IEEE Transac-
tions on Circuits and Systems I, vol. 51, no. 10, pp. 2060–2073,
2004.
[3] A. A. Lazar and E. A. Pnevmatikakis, “Faithful representation
of stimuli with a population of integrate-and-fire neurons,”
Neural Computation, vol. 20, no. 11, pp. 2715–2744, 2008.
[4] A. A. Lazar and E. A. Pnevmatikakis, “A video time encoding
machine,” in Proceedings of the 15th IEEE International
Conference on Image Processing (ICIP ’08), pp. 717–720, San
Diego, Calif, USA, October 2008.
[5] P. N. Steinmetz, A. Manwani, and C. Koch, “Variability and
coding efficiency of noisy neural spike encoders,” BioSystems,
vol. 62, no. 1–3, pp. 87–97, 2001.
[6] G. Gestri, H. A. K. Mastebroek, and W. H. Zaagman,
“Stochastic constancy, variability and adaptation of spike
generation: performance of a giant neuron in the visual system
of the fly,” Biological Cybernetics, vol. 38, no. 1, pp. 31–40,

1980.
[7] F. Gabbiani and C. Koch, “Coding of time-varying signals
in spike trains of integrate-and-fire neurons with random
threshold,” Neural Computation, vol. 8, no. 1, pp. 44–66, 1996.
[8] A. A. Lazar and E. A. Pnevmatikakis, “Consistent recovery of
stimuli encoded with a neural ensemble,” in Proceedings of
IEEE International Conference on Acoustics, Speech, and Signal
Processing (ICASSP ’09), pp. 3497–3500, Taipei, Taiwan, April
2009.
[9] A. Berlinet and C. Thomas-Agnan, Reproducing Kernel Hilbert
Spaces in Probability and Statistics, Kluwer Academic Publish-
ers, Dordrecht, The Netherlands, 2004.
[10] G. Wahba, Spline Models for Observational Data, SIAM,
Philadelphia, Pa, USA, 1990.
[11] V. N. Vapnik, Statistical Learning Theory, Wiley-Interscience,
New York, NY, USA, 1998.
[12] A. R. C. Paiva, I. Park, and J. C. Príncipe, “A reproducing kernel Hilbert space framework for spike train signal processing,”
Neural Computation, vol. 21, no. 2, pp. 424–449, 2009.
[13] I. Dimatteo, C. R. Genovese, and R. E. Kass, “Bayesian curve-
fitting with free-knot splines,” Biometrika, vol. 88, no. 4, pp.
1055–1071, 2001.
[14] R. E. Kass and V. Ventura, “A spike-train probability model,”
Neural Computation, vol. 13, no. 8, pp. 1713–1720, 2001.
[15] N. Aronszajn, “Theory of reproducing kernels,” Transactions of
the American Mathematical Society, vol. 68, no. 3, pp. 337–404,
1950.
[16] R. A. Adams, Sobolev Spaces, Academic Press, New York, NY,

USA, 1975.
[17] P. Dayan and L. F. Abbott, Theoretical Neuroscience: Compu-
tational and Mathematical Modeling of Neural Systems, MIT
Press, Cambridge, Mass, USA, 2001.
[18] P. Lichtsteiner, C. Posch, and T. Delbruck, “A 128×128 120 dB
15 μs latency asynchronous temporal contrast vision sensor,”
IEEE Journal of Solid-State Circuits, vol. 43, no. 2, pp. 566–576,
2008.
[19] J. W. Pillow, L. Paninski, V. J. Uzzell, E. P. Simoncelli, and E. J.
Chichilnisky, “Prediction and decoding of retinal ganglion cell
responses with a probabilistic spiking model,” The Journal of
Neuroscience, vol. 25, no. 47, pp. 11003–11013, 2005.
[20] M. Reed and B. Simon, Methods of Modern Mathematical
Physics. Vol. 1: Functional Analysis, vol. 1, Academic Press, New
York, NY, USA, 1980.
