includes active dendritic spines (Shepherd et al. 1985). It has been postulated that enhanced propagation of voltage changes induced by synaptic input can occur because spine-spine excitations occur in a way similar to the saltatory conduction of spikes in a myelinated nerve.

Propagation in nerve bundles
When several nerve axons form a nerve tract, there is the possibility of electrical interactions between individual fibers through the extracellular space. This was demonstrated by Katz and Schmidt (1940), who found that when a pulse in one fiber was followed by a faster pulse in an adjacent fiber, the second pulse would catch up with the first and the two pulses would travel together in a locked-in fashion. There have been a few theoretical investigations of such interactions. Scott and Luzader (1979) and Eilbeck, Luzader, and Scott (1981) employed the coupled FitzHugh-Nagumo-type equations
$$ v_{1,t} = (1-\alpha)v_{1,xx} - \alpha v_{2,xx} + f(v_1) - w_1, \tag{8.217} $$
$$ w_{1,t} = b(v_1 - \gamma w_1), \tag{8.218} $$
$$ v_{2,t} = (1-\alpha)v_{2,xx} - \alpha v_{1,xx} + f(v_2) - w_2, \tag{8.219} $$
$$ w_{2,t} = b(v_2 - \gamma w_2), \tag{8.220} $$
where $\alpha \ll 1$ is a coupling parameter. The solutions of the traveling-wave equations were expanded as a perturbation series in powers of $\alpha$, and an expression was obtained for the first-order correction to the speed of propagation when $f(\cdot)$ was piecewise linear. Bell (1981) has considered a pair of coupled Hodgkin-Huxley axons and provided conditions for the existence of a traveling-pulse solution on both fibers. A result was also obtained in the case of $n > 2$ fibers and for other dynamical equations representing the behavior of active membrane (Bell and Cook 1978, 1979).

The stochastic activity of neurons

9.1 Introduction
All of the models of nerve-cell activity we have considered thus far have been deterministic. They have consisted of a differential equation (with a threshold condition imposed) or a system of differential equations, in conjunction with a given input current and given boundary-initial-value conditions. To illustrate, consider the linear Lapicque model of Chapter 3. If $V(t)$ is the depolarization at time $t$, $C$ is the neuron capacitance, $R$ is the neuron resistance, and $I(t)$ is the input current, then
$$ C\frac{dV}{dt} + \frac{V}{R} = I(t), \qquad t > 0,\ V < \theta,\ V(0) = 0, \tag{9.1} $$
with a spike generated when $V$ reaches or exceeds $\theta$. If $I(t)$ is constant, the predicted time between action potentials is always the same and is given by (3.52). If $I(t) = I_0 + I_1\cos(\omega t)$, the sequence of interspike intervals is likewise completely determined. Real nerve cells, however, do not behave so regularly: their interspike intervals typically fluctuate from spike to spike, and some synaptic events, such as the randomly occurring miniature endplate potentials discovered by Fatt and Katz (1952), occur spontaneously. Such observations motivate the probabilistic models considered in this chapter.

9.2 Probability and random variables
Two kinds of discrete random variables will be important in what follows. A binomial random variable with parameters $n$ and $p$ takes the values $k = 0, 1, \ldots, n$ with probabilities $p_k = \binom{n}{k}p^k q^{n-k}$, where $q = 1 - p$. A Poisson random variable with parameter $\lambda > 0$ takes on non-negative integer values and has the probability law
$$ p_k = \frac{e^{-\lambda}\lambda^k}{k!}, \qquad k = 0, 1, 2, \ldots. \tag{9.4} $$
For both of these random variables the total probability mass is unity,
$$ \sum_k p_k = 1. \tag{9.5} $$
Continuous random variables take on a continuum of values. For any random variable the distribution function is
$$ F(x) = \Pr\{X \le x\}. \tag{9.6} $$
Usually the probability law of a continuous random variable can be expressed through its probability density $p(x)$, which is the derivative of its distribution function. Then, roughly speaking, $p(x)\,dx$ is the probability that the random variable takes on values in $(x, x + dx]$. In what follows two kinds of continuous random variables are important. One is the normal, or Gaussian, random variable with density
$$ p(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\!\left[-\frac{(x-\mu)^2}{2\sigma^2}\right], \qquad -\infty < x < \infty, \tag{9.7} $$
where $\mu$ and $\sigma^2$ are constants. The other has a gamma density
$$ p(x) = \frac{\lambda}{\Gamma(r)}(\lambda x)^{r-1}e^{-\lambda x}, \qquad x > 0, \tag{9.8} $$
where $\lambda > 0$ and $r > 0$ are constants and $\Gamma(r) = \int_0^\infty e^{-x}x^{r-1}\,dx$ is the gamma function. A special case is that of an exponentially distributed random variable with $r = 1$, so the density is $p(x) = \lambda e^{-\lambda x}$.
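As an illustrative aside (an addition, not part of the original text), the following minimal Python sketch evaluates the normal and gamma densities of (9.7) and (9.8), confirms that the case $r = 1$ reduces to the exponential density $\lambda e^{-\lambda x}$, and checks numerically that the total probability mass is unity. The parameter values are arbitrary.

```python
import numpy as np
from math import gamma as gamma_fn

def normal_density(x, mu, sigma2):
    # Normal (Gaussian) density, Eq. (9.7)
    return np.exp(-(x - mu) ** 2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)

def gamma_density(x, lam, r):
    # Gamma density, Eq. (9.8); r = 1 gives the exponential density
    return (lam / gamma_fn(r)) * (lam * x) ** (r - 1) * np.exp(-lam * x)

lam, r = 0.8, 3.0                      # illustrative values only
dx = 0.001
x = np.arange(dx, 80.0, dx)            # positive half-line for the gamma density
xn = np.arange(-40.0, 40.0, dx)        # real line for the normal density

# The case r = 1 reproduces the exponential density lam * exp(-lam * x)
print(np.allclose(gamma_density(x, lam, 1.0), lam * np.exp(-lam * x)))

# Total probability mass is (numerically) unity, cf. (9.5) and (9.9)
print(np.sum(normal_density(xn, 1.0, 4.0)) * dx)   # ~ 1.0
print(np.sum(gamma_density(x, lam, r)) * dx)       # ~ 1.0
```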
For continuous random variables the total probability is also unity,
$$ \int p(x)\,dx = 1, \tag{9.9} $$
where the range of integration is $(-\infty, \infty)$ for a normal random variable and $(0, \infty)$ for a gamma variate.

Mean, variance, and covariance
The mean, or expected value, of an integer-valued random variable is
$$ E[X] = \sum_k k\,p_k. \tag{9.10} $$
This gives $E[X] = np$ for the binomial and $E[X] = \lambda$ for the Poisson random variable. For continuous random variables,
$$ E[X] = \int x\,p(x)\,dx. \tag{9.11} $$
This gives $E[X] = \mu$ for the normal and $E[X] = r/\lambda$ for the gamma random variable. Note that expectation is a linear operation, so $E[aX + bY] = aE[X] + bE[Y]$. The second moment is $E[X^2]$, and the variance, which tells us how much scatter there is about the mean, is
$$ \mathrm{Var}[X] = E\!\left[(X - E[X])^2\right] = E[X^2] - E^2[X]. \tag{9.12} $$
The variances of the random variables considered above are: binomial, $npq$; Poisson, $\lambda$; normal, $\sigma^2$; and gamma, $r/\lambda^2$.

Conditional probability and independence
Let $A$ and $B$ be two random events. The conditional probability of $A$ given $B$ is defined as
$$ \Pr\{A \mid B\} = \Pr\{AB\}/\Pr\{B\}, \tag{9.13} $$
where $AB$ means both $A$ and $B$ occur and it is assumed that $\Pr\{B\} \ne 0$. That is, only those occurrences of $A$ that are simultaneous with those of $B$ are taken into account. This extends to random variables. For example, if $X$ and $Y$ are two random variables, defined on the same sample space, taking on values $x_i$, $i = 1, 2, \ldots$, and $y_j$, $j = 1, 2, \ldots$, respectively, then the conditional probability that $X = x_i$ given $Y = y_j$ is
$$ \Pr\{X = x_i \mid Y = y_j\} = \Pr\{X = x_i, Y = y_j\}/\Pr\{Y = y_j\}. \tag{9.14} $$
Note that (9.13) rearranges to
$$ \Pr\{AB\} = \Pr\{B\}\Pr\{A \mid B\}. \tag{9.14A} $$
The conditional expectation of $X$ given $Y = y_j$ is
$$ E[X \mid Y = y_j] = \sum_i x_i \Pr\{X = x_i \mid Y = y_j\}. \tag{9.15} $$
The expected value of $XY$ is $E[XY] = \sum_i\sum_j x_i y_j \Pr\{X = x_i, Y = y_j\}$, and the covariance of two random variables $X$ and $Y$ is
$$ \mathrm{Cov}[X, Y] = E\!\left[(X - E[X])(Y - E[Y])\right] = E[XY] - E[X]E[Y]. \tag{9.16} $$
The covariance is a measure of the linear dependence of $X$ and $Y$. If $X$ and $Y$ are independent, the value of $Y$ should have no influence on the probability that $X$ takes on its values. Hence we may define $X$ and $Y$ as independent if
$$ \Pr\{X = x_i \mid Y = y_j\} = \Pr\{X = x_i\}, \tag{9.17} $$
for all $i, j$. Equivalently, $\Pr\{X = x_i, Y = y_j\} = \Pr\{X = x_i\}\Pr\{Y = y_j\}$, which leads, in the case of independent random variables, to
$$ E[XY] = E[X]E[Y]. \tag{9.17A} $$
It also follows that if $X$ and $Y$ are independent, $\mathrm{Cov}[X, Y] = 0$ (but note that $\mathrm{Cov}[X, Y] = 0$ does not always imply that $X$ and $Y$ are independent). If $X_i$, $i = 1, 2, \ldots, n$, are mutually independent, then
$$ \mathrm{Var}\!\left[\sum_{i=1}^{n} X_i\right] = \sum_{i=1}^{n}\mathrm{Var}[X_i]. \tag{9.17B} $$
If $A_i$, $i = 1, 2, \ldots, n$, are mutually exclusive events, at least one of which must occur, and $B$ is another event, then
$$ \Pr\{B\} = \sum_{i=1}^{n}\Pr\{A_i\}\Pr\{B \mid A_i\}, \tag{9.17C} $$
which is the law of total probability. Finally, the characteristic function of a random variable $X$ is defined as the expectation of the complex random variable $\exp(iuX)$, where $u$ varies from $-\infty$ to $+\infty$:
$$ \phi_X(u) = E\!\left[e^{iuX}\right], \qquad u \in (-\infty, \infty). \tag{9.18} $$
(Here $i = \sqrt{-1}$.) Characteristic functions are important because there is a one-to-one correspondence between them and distribution functions. It is left as an exercise to show that if $X$ is Poisson with parameter $\lambda$, then
$$ \phi_X(u) = \exp\!\left[\lambda\left(e^{iu} - 1\right)\right], \tag{9.18A} $$
whereas if $X$ is normal with mean $\mu$ and variance $\sigma^2$,
$$ \phi_X(u) = \exp\!\left[i\mu u - \tfrac{1}{2}\sigma^2 u^2\right]. \tag{9.18B} $$
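As a quick illustration of (9.18A) and (9.18B) (an addition, not from the original text), the sketch below estimates the characteristic function $E[e^{iuX}]$ from simulated samples and compares it with the stated closed forms; the sample size, the argument $u$, and the parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
u = 1.3                      # an arbitrary argument of the characteristic function
n = 200_000

# Poisson with parameter lam: phi(u) = exp(lam*(exp(iu) - 1)), Eq. (9.18A)
lam = 2.0
x_pois = rng.poisson(lam, n)
print(np.mean(np.exp(1j * u * x_pois)), np.exp(lam * (np.exp(1j * u) - 1.0)))

# Normal with mean mu and variance sigma2: phi(u) = exp(i*mu*u - sigma2*u**2/2), Eq. (9.18B)
mu, sigma2 = 0.5, 2.0
x_norm = rng.normal(mu, np.sqrt(sigma2), n)
print(np.mean(np.exp(1j * u * x_norm)), np.exp(1j * mu * u - 0.5 * sigma2 * u ** 2))
```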
9.3 The quantum hypothesis in synaptic transmission
In this section we will obtain a probabilistic description of the small voltage changes that occur at spontaneously active synapses. This will be done in the context of nerve-muscle synapses, but the same kind of stochastic model should apply at synapses within the nervous system. In Section 9.1 we mentioned Fatt and Katz's (1952) discovery of randomly occurring miniature endplate potentials at the frog neuromuscular junction. An example of the records they obtained is shown in Figure 9.4. The mean amplitude of the m.e.p.p.'s was about 0.5 mV. This should be compared with the normal postsynaptic response, the endplate potential (e.p.p.), which results when a nerve impulse invades the presynaptic terminal. The e.p.p. has an amplitude between 50 and 70 mV (Kuffler and Nicholls 1976). In a reduced Ca²⁺ bathing solution the amplitude of the e.p.p. is reduced. Fatt and Katz observed that in such low-Ca²⁺ solutions the amplitudes of the reduced e.p.p.'s were approximately in multiples of the amplitudes of the spontaneous m.e.p.p.'s. The quantum hypothesis (del Castillo and Katz 1954) was advanced: that the normal e.p.p. is the result of the almost simultaneous occurrence of several m.e.p.p.'s.

Figure 9.4. Randomly occurring postsynaptic potentials (m.e.p.p.'s) recorded intracellularly from frog muscle fiber. Several records are shown from the same fiber. [From Fatt and Katz (1952). Reproduced with the permission of The Physiological Society and the authors.]

It was speculated that the transmitter (acetylcholine) was, in fact, released in packets containing several thousand molecules, corresponding to the quantal EPSP's mentioned in Section 1.5. Furthermore, it seemed likely that the synaptic vesicles of diameter about 500 Å, seen with the electron microscope to reside in the presynaptic terminal, were, in fact, the packets of transmitter.

Probability model
We assume there are $n$ sites within the nerve terminal at which transmitter release may occur. When an action potential invades the terminal, release occurs at a random number $M$ of sites. The amplitude of the response due to release at each site is random, that at the $i$th active site being $V_i$. Each of the $V_i$'s is assumed to have the same probability distribution, and the sites act independently of each other. The amplitude of the e.p.p. is then
$$ V = V_1 + V_2 + \cdots + V_M. \tag{9.19} $$
This is a sum in which the number of terms is random. In the first instance we would assume that $M$ is binomial with parameters $n$ and $p$, where $p$ is the probability that a site is active. However, if $p$ is small we may use the Poisson approximation to the binomial, which has the advantage that the distribution of $M$ then contains only one parameter, $\lambda = np$, the mean number of active sites. The value of $\lambda$ can be estimated from the fraction of trials that result in no response, that is,
$$ \Pr\{M = 0\} = e^{-\lambda}. \tag{9.20} $$
When $M$ has a Poisson distribution, the random sum $V$ is said to have a compound Poisson distribution.

Figure 9.5. Histogram of the amplitudes of the spontaneous m.e.p.p.'s. The smooth curve is a normal density. [From Boyd and Martin (1956). Reproduced with the permission of The Physiological Society and the authors.]

Figure 9.5 shows the histogram of amplitudes of the miniature spontaneous potentials in one preparation. The smooth curve through the histogram is a normal probability density with the same mean and variance. Thus we assume that the $V_i$'s are normal with mean $\mu$ and variance $\sigma^2$.
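The compound Poisson model just described is easy to simulate. The sketch below (an addition, not from the original text) draws the number of active sites $M$ from a Poisson distribution and sums $M$ independent normal unit amplitudes, as in (9.19); the quantal parameter values are hypothetical and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def epp_amplitudes(lam, mu, sigma, n_trials):
    """Simulate e.p.p. amplitudes V = V_1 + ... + V_M, Eq. (9.19),
    with M ~ Poisson(lam) and each V_i ~ N(mu, sigma^2)."""
    m = np.empty(n_trials, dtype=int)
    v = np.empty(n_trials)
    for i in range(n_trials):
        m[i] = rng.poisson(lam)                      # number of active sites on this trial
        v[i] = rng.normal(mu, sigma, m[i]).sum()     # zero when m[i] == 0 (a failure)
    return v, m

lam, mu, sigma = 2.3, 0.4, 0.1                       # hypothetical quantal parameters (mV)
v, m = epp_amplitudes(lam, mu, sigma, 20_000)
print("fraction of failures:", np.mean(m == 0), "vs exp(-lam) =", np.exp(-lam))  # Eq. (9.20)
print("sample mean of V:", v.mean(), "mV")
```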
We are now able to find the density of the amplitude of the e.p.p., $V$. The law of total probability gives
$$ \Pr\{v < V \le v + dv\} = \sum_{m=0}^{\infty}\Pr\{v < V \le v + dv \mid M = m\}\Pr\{M = m\}. \tag{9.21} $$
If $M = m > 0$, there are exactly $m$ terms in the sum (9.19),
$$ V = V_1 + V_2 + \cdots + V_m. \tag{9.22} $$
These terms are independent and each is normally distributed. A basic theorem on the sum of independent normal random variables (Parzen 1962, page 17) tells us that $V$ is then normally distributed with mean
$$ E[V \mid M = m] = mE[V_1] = m\mu \tag{9.23} $$
and variance
$$ \mathrm{Var}[V \mid M = m] = m\,\mathrm{Var}[V_1] = m\sigma^2. \tag{9.24} $$
In the case $m = 0$ the amplitude has a density that is a delta function concentrated at $v = 0$ with weight $e^{-\lambda}$. Putting all this together, we arrive at the probability density of the amplitude of the e.p.p.,
$$ p_V(v) = e^{-\lambda}\left[\delta(v) + \sum_{m=1}^{\infty}\frac{\lambda^m}{m!}\,\frac{1}{\sqrt{2\pi m\sigma^2}}\exp\!\left(\frac{-(v - m\mu)^2}{2m\sigma^2}\right)\right], \tag{9.25} $$
which agrees with the formula of Bennett and Florin (1974). It is left as an exercise to show that
$$ E[V] = \lambda\mu, \tag{9.25A} $$
$$ \mathrm{Var}[V] = \lambda\left(\mu^2 + \sigma^2\right). \tag{9.25B} $$

Figure 9.6. Histogram of amplitudes of the e.p.p. The smooth curve is the density of a compound Poisson random variable as given by (9.25). [From Boyd and Martin (1956). Reproduced with the permission of The Physiological Society and the authors.]

Figure 9.6 shows the histogram of amplitudes of the endplate potentials along with the density predicted by (9.25). It can be seen that there is good agreement between the experimental and theoretical results, thus substantiating the quantum hypothesis. Note that since the normal e.p.p. is about 50 mV, as many as 100 or more quanta are released when the action potential invades the terminal. Supporting evidence for the quantum hypothesis has been obtained in other preparations [see Martin (1977)]. However, a gamma, rather than a normal, density has sometimes been needed for the unit response [see, for example, Bornstein (1978)], and a binomial analysis has sometimes been required when the Poisson approximation has proven inadequate (Miyamoto 1975; Volle and Branisteanu 1976). Brown, Perkel, and Feldman (1976) showed, with the aid of computer simulation, that spatial and temporal nonuniformities in the release parameters can easily lead to erroneous estimates of these quantities.

9.4 The Poisson process
A random process is a family of random variables. We will only be concerned with families of random variables parameterized by a continuous index $t$, the time. The chief physical random processes with which we will be concerned are the nerve membrane potential and the number of synaptic inputs. A random variable has a fixed probability law, which is easy to describe, but the probability laws of continuous-time random processes are more complicated. If we let $X(t)$ denote the value of a general random process at time $t$, the whole process is the family $\{X(t), t \ge 0\}$, which we sometimes abbreviate to $X$.

The first random process we examine is the simple Poisson process. This is important for several reasons: (i) it is a basic, much-studied process and can be used as a standard against which some physical processes can be compared; (ii) it may be used to synthesize more complicated processes of interest; and (iii) it forms a useful approximation for the inputs to a cell when these are many and unsynchronized. There are many equivalent definitions of the Poisson process [see, for example, Parzen (1962)], one of which is as follows.

Definition. $\{N(t), t \ge 0\}$ is a simple Poisson process with intensity, or mean rate, $\lambda$ if: (a) $N(0) = 0$; (b) given any $0 = t_0 < t_1 < t_2 < \cdots < t_{n-1} < t_n$, the random variables $N(t_k) - N(t_{k-1})$, $k = 1, 2, \ldots, n$, are mutually independent; and (c) for any $0 \le t_1 < t_2$, $N(t_2) - N(t_1)$ is a Poisson random variable with probability distribution
$$ \Pr\{N(t_2) - N(t_1) = k\} = \frac{e^{-\lambda(t_2 - t_1)}\left[\lambda(t_2 - t_1)\right]^k}{k!}, \qquad k = 0, 1, 2, \ldots. \tag{9.26} $$
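Approximate sample paths of a simple Poisson process can be built directly from properties (a)-(c) of the definition by drawing independent Poisson-distributed increments on a fine grid. The following sketch (an addition, not from the original text) does this and checks that $N(t)$ has mean and variance $\lambda t$; the numerical values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def poisson_paths(lam, t_max, dt, n_paths):
    """Sample paths of a simple Poisson process with rate lam on a grid of step dt.
    Increments over disjoint intervals are independent Poisson variates,
    per properties (b) and (c) of the definition."""
    n_steps = int(t_max / dt)
    increments = rng.poisson(lam * dt, size=(n_paths, n_steps))
    return np.cumsum(increments, axis=1)            # N(dt), N(2*dt), ...

lam, t_max, dt = 4.0, 5.0, 0.01                     # illustrative values
paths = poisson_paths(lam, t_max, dt, 5000)
print("E[N(t_max)]  ~", paths[:, -1].mean(), "vs lam*t_max =", lam * t_max)
print("Var[N(t_max)] ~", paths[:, -1].var(), "vs lam*t_max =", lam * t_max)
```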
Fix an arbitrary time point $s \ge 0$ and let $T_1$ be the time to the first event occurring after $s$. Then $T_1$ is exponentially distributed with mean $1/\lambda$.

Proof. The probability that one has to wait longer than $t$ for the first event is the probability that there are no events in $(s, s + t]$. Thus
$$ \Pr\{T_1 > t\} = \Pr\{N(s + t) - N(s) = 0\} = e^{-\lambda t}, \qquad t > 0. \tag{9.28} $$
Thus the distribution function of $T_1$ is $1 - e^{-\lambda t}$, $t > 0$, and hence the probability density function $p_1$ of $T_1$ is
$$ p_1(t) = \lambda e^{-\lambda t}, \qquad t > 0, \tag{9.29} $$
as required.

Since $s$ was completely arbitrary, it could have coincided with the time of an event. It may be shown, in fact, that the time interval between events is exponentially distributed with mean $1/\lambda$. Since the average waiting time between events is $1/\lambda$, there are, roughly speaking, on average $\lambda$ events per unit time. Thus $\lambda$ is called the mean rate, or intensity.

Figure 9.8. Histogram of time intervals between m.e.p.p.'s at the frog neuromuscular junction. [From Fatt and Katz (1952). Reproduced with the permission of The Physiological Society and the authors.]

Figure 9.8 shows the histogram, obtained by Fatt and Katz (1952), of time intervals between the spontaneous miniature endplate potentials at the frog neuromuscular junction. The histogram has the shape of an exponential density. Fatt and Katz were thus led to make their Poisson hypothesis: that the arrival times of the m.e.p.p.'s constitute a Poisson point process [see also Van der Kloot, Kita, and Cohen (1975)].

The Poisson process as a primitive model for nerve-cell activity
Let the depolarization of a nerve cell be $\{V(t), t \ge 0\}$. Suppose that excitatory inputs occur at random in accordance with events in a simple Poisson process $\{N(t), t \ge 0\}$ with mean rate $\lambda$. Each excitatory input causes $V$ to increase by $a_E$. When $V$ reaches or exceeds the constant threshold level $\theta > 0$, the cell emits an action potential. Then
$$ V(t) = a_E N(t), \qquad V < \theta,\ V(0) = 0. \tag{9.30} $$
In this primitive nerve-cell model, what is the probability distribution of the time interval between action potentials? To answer this we first ask for the waiting time $T_k$ until the $k$th event in a simple Poisson process after the arbitrary time point $s$. We will show that $T_k$ has a gamma density with parameters $k$ and $\lambda$.

Proof. The $k$th event will occur in $(s + t, s + t + \Delta t]$ if and only if there are $k - 1$ events in $(s, s + t]$ and one event in $(s + t, s + t + \Delta t]$. It follows that
$$ \Pr\{T_k \in (t, t + \Delta t]\} = \frac{e^{-\lambda t}(\lambda t)^{k-1}}{(k-1)!}\,\lambda\,\Delta t + o(\Delta t), \qquad k = 1, 2, \ldots. \tag{9.31} $$
Hence the density of $T_k$ is
$$ p_k(t) = \frac{\lambda(\lambda t)^{k-1}e^{-\lambda t}}{(k-1)!}, \qquad t > 0, \tag{9.32} $$
as required.

Thus the waiting time for the $k$th event has a gamma density with parameters $k$ and $\lambda$, so $T_k$ has mean $k/\lambda$ and variance $k/\lambda^2$. Some gamma densities are illustrated in Figure 9.9.

To return to the primitive nerve-cell model, an action potential is emitted when $V$ reaches or exceeds $\theta$, or, equivalently, when $N$ reaches or exceeds $\theta/a_E$. Letting $[x]$ denote the largest integer less than $x$, we find that $1 + [\theta/a_E]$ excitatory inputs are required. Hence the time interval between action potentials has a gamma density with parameters $1 + [\theta/a_E]$ and $\lambda$. The mean time interval between action potentials is $(1 + [\theta/a_E])/\lambda$. The gamma densities, in fact, resemble the ISI histograms obtained for many nerve cells and have often been fitted to them [see, for example, Stein (1965)].

Figure 9.9. Gamma densities for λ = 1 and k = 1, 2, and 4. [Adapted from Cox and Miller (1965).]
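The primitive model lends itself to a direct simulation check (an addition, not part of the original text): each interspike interval is a sum of $1 + [\theta/a_E]$ exponential waiting times, so its sample mean and variance should match the gamma values $k/\lambda$ and $k/\lambda^2$. The parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def isi_samples(lam, theta, a_e, n_spikes):
    """ISIs of the primitive model V(t) = a_E * N(t): the cell fires after
    k = 1 + [theta/a_e] excitatory inputs, so each ISI is a sum of k
    independent exponential(lam) waiting times (a gamma variate).
    floor() coincides with the bracket [.] here because theta/a_e is not an integer."""
    k = 1 + int(np.floor(theta / a_e))
    waits = rng.exponential(1.0 / lam, size=(n_spikes, k))
    return waits.sum(axis=1), k

lam, theta, a_e = 10.0, 9.5, 1.0                # illustrative parameters
isi, k = isi_samples(lam, theta, a_e, 50_000)
print("mean ISI ~", isi.mean(), "vs", k / lam)          # (1 + [theta/a_E]) / lambda
print("var  ISI ~", isi.var(), "vs", k / lam ** 2)      # gamma variance k / lambda^2
```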
However, the model employed to derive the gamma densities incorporates no decay of membrane potential between excitatory inputs. Only when the cell has an exceedingly large time constant and the rate of incoming excitation is very fast will this approximation be valid. Finally, we note that when $\theta/a_E$ is large the gamma density becomes approximately that of a normal random variable.

9.5 Poisson excitation with Poisson inhibition
We will here consider another primitive model for nerve-cell activity in which excitation and inhibition arrive according to independent Poisson processes. This gives what is commonly called a birth-and-death process, which in this case Feller calls a randomized random walk. Much of what follows is taken from Feller's treatment (1966). Again we ignore the decay of membrane potential between inputs. Despite the unphysiological nature of the model, it is useful because: (i) the model can be analyzed completely, thereby providing a standard with which to compare other models and also real nerve cells; and (ii) in a limiting case we obtain a Wiener process (see the next section), which is useful in many situations.

Let $V(t)$, $t \ge 0$, be the depolarization at time $t$. We assume that the number of excitatory inputs in $(0, t]$ is $N_E(t)$, where $N_E$ is a Poisson process with mean rate $\lambda_E$, and that the number of inhibitory inputs is $N_I(t)$, where $N_I$ is a Poisson process with mean rate $\lambda_I$. Each excitatory input makes $V$ jump up by unity, whereas each inhibitory input causes $V$ to jump down by unity. Thus
$$ V(t) = N_E(t) - N_I(t), \qquad V(0) = 0,\ V < \theta. \tag{9.33} $$
The total number of jumps in $(0, t]$ is $N_E(t) + N_I(t)$, which is a Poisson process with mean rate $\lambda = \lambda_E + \lambda_I$; given that a jump occurs, it is upward with probability $p = \lambda_E/\lambda$ and downward with probability $q = \lambda_I/\lambda = 1 - p$. We now find the probability $p_m(t) = \Pr\{V(t) = m\}$, $m = 0, \pm 1, \pm 2, \ldots$, $t > 0$.

Let the process $V$ be at $m$ at $t$ and suppose that $n \ge |m|$ jumps have occurred, of which $n_1$ were $+1$ and $n_2$ were $-1$. Then we must have
$$ n = n_1 + n_2, \tag{9.38A} $$
$$ m = n_1 - n_2, \tag{9.38B} $$
and hence
$$ n_1 = (m + n)/2, \tag{9.38C} $$
$$ n = m + 2n_2. \tag{9.38D} $$
The probability that $V(t) = m$ if $n$ jumps have occurred in $(0, t]$ is the probability that a binomial random variable with parameters $n$ and $p$ takes the value $n_1$. That is,
$$ \Pr\{V(t) = m \mid n \text{ jumps in } (0, t]\} = \binom{n}{(n+m)/2}p^{(n+m)/2}q^{(n-m)/2}. \tag{9.39} $$
By the law of total probability,
$$ \Pr\{V(t) = m\} = \sum_n \Pr\{V(t) = m \mid n \text{ jumps in } (0, t]\}\,\Pr\{n \text{ jumps in } (0, t]\}. \tag{9.40} $$
Since $n$ jumps in $(0, t]$ has probability $e^{-\lambda t}(\lambda t)^n/n!$, we find
$$ p_m(t) = e^{-\lambda t}{\sum_n}'\,\frac{(\lambda t)^n}{n!}\binom{n}{(n+m)/2}p^{(n+m)/2}q^{(n-m)/2}, \tag{9.41} $$
where the prime on the summation sign indicates that summation is over either even or odd $n$, depending on whether $m$ is even or odd, respectively. Utilizing (9.38) and the fact that $n = m, m+2, m+4, \ldots$ implies $n_2 = 0, 1, 2, \ldots$, this becomes
$$ p_m(t) = e^{-\lambda t}\sum_{n_2=0}^{\infty}\frac{(\lambda_E t)^{m+n_2}(\lambda_I t)^{n_2}}{(m+n_2)!\,n_2!}. \tag{9.42} $$
In terms of the modified Bessel function,
$$ I_\nu(x) = \sum_{k=0}^{\infty}\frac{(x/2)^{2k+\nu}}{k!\,\Gamma(k+\nu+1)}, $$
we get
$$ p_m(t) = e^{-\lambda t}\left(\frac{\lambda_E}{\lambda_I}\right)^{m/2} I_m\!\left(2t\sqrt{\lambda_E\lambda_I}\right). \tag{9.43} $$
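The Bessel-function expression (9.43) can be checked numerically against a direct simulation of the randomized random walk. The sketch below (an addition, not from the original text) does so for illustrative rates; it uses the fact that $I_{-m} = I_m$ for integer order.

```python
import numpy as np
from scipy.special import iv          # modified Bessel function I_nu

rng = np.random.default_rng(4)

def p_m_exact(m, t, lam_e, lam_i):
    """p_m(t) from Eq. (9.43); I_{-m} = I_m for integer m."""
    lam = lam_e + lam_i
    return (np.exp(-lam * t) * (lam_e / lam_i) ** (m / 2.0)
            * iv(abs(m), 2.0 * t * np.sqrt(lam_e * lam_i)))

def p_m_monte_carlo(m, t, lam_e, lam_i, n):
    # V(t) = N_E(t) - N_I(t), Eq. (9.33), with independent Poisson counts
    v = rng.poisson(lam_e * t, n) - rng.poisson(lam_i * t, n)
    return np.mean(v == m)

lam_e, lam_i, t = 1.5, 1.0, 2.0       # illustrative values
for m in (-2, 0, 1, 3):
    print(m, p_m_exact(m, t, lam_e, lam_i), p_m_monte_carlo(m, t, lam_e, lam_i, 400_000))
```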
9.5.1 Time of first passage to threshold
We assume that there is a fixed threshold $\theta$ which, when reached by $V$, leads to the emission of an action potential. The threshold condition is an imposed one, and after a spike the potential is artificially reset to zero, possibly after a dead time or refractory period. Passage to time-varying thresholds in this model does not seem to have been considered. We let $\theta$ be a positive integer and seek the time of first passage of $V$ to $\theta$, which is identified with the interspike interval. To find the probability distribution of the first-passage time, we employ the method of images in the symmetric case ($\lambda_E = \lambda_I$) and the renewal equation in the asymmetric case.

(A) Symmetric case: method of images
We will first find $p_m^*(t)$, the probability that the randomized random walk is at level $m$ at $t$ but has stayed below $\theta$ up to time $t$. This is in distinction to $p_m(t)$, which includes passages below, to, and above $\theta$. Consider Figure 9.10, where a randomized walk process $U$ is shown starting from the image point $2\theta$ and having the value $m$ at $t$. There is a one-to-one correspondence between such paths of $U$ and those of $V$ that start at 0, touch and/or cross the level $\theta$ in $(0, t)$, and end up at $m$ at $t$. By symmetry, the probability assigned to such paths is $p_{2\theta - m}(t)$. These paths are excluded in computing $p_m^*$, so we have
$$ p_m^*(t) = p_m(t) - p_{2\theta - m}(t). \tag{9.44} $$

Figure 9.10. Paths considered in the method of images.

To obtain the probability density $f_\theta(t)$ of the time of first passage to level $\theta$, note that a first passage to $\theta$ occurs in $(t, t + \Delta t]$ if $V$ has stayed below $\theta$ in $(0, t]$, is, in fact, at $\theta - 1$ at $t$, and a jump of $+1$ occurs in $(t, t + \Delta t]$. The probability of a jump of $+1$ in $(t, t + \Delta t]$ is $\lambda_E\,\Delta t = (\lambda/2)\,\Delta t$. Putting these probabilities together gives
$$ f_\theta(t)\,\Delta t = p_{\theta-1}^*(t)\,(\lambda/2)\,\Delta t. \tag{9.45} $$
Utilizing (9.43) and (9.44) and the fact that $\lambda_E = \lambda_I = \lambda/2$, we find
$$ f_\theta(t) = \frac{\lambda}{2}e^{-\lambda t}\left[I_{\theta-1}(\lambda t) - I_{\theta+1}(\lambda t)\right], \qquad t > 0. \tag{9.46} $$
A more succinct expression results on using the recurrence relation for the modified Bessel function,
$$ I_{\theta-1}(x) - I_{\theta+1}(x) = \frac{2\theta}{x}I_\theta(x), \tag{9.47} $$
whereupon
$$ f_\theta(t) = \frac{\theta}{t}e^{-\lambda t}I_\theta(2\lambda_E t), \qquad t > 0. \tag{9.48} $$

(B) General case: the renewal equation
When the rates of arrival of jumps up and down are not equal, we resort to another method for obtaining $f_\theta(t)$. The idea on which the method is based is illustrated in Figure 9.11. A path is shown starting at zero and attaining the value $m > \theta$ at $t$. Since $m > \theta$, such a path must at some time before $t$ have passed through the level $\theta$ and, in particular, at some time $t' < t$ have done so for the first time. Integrating over all such paths gives, using a continuous version of the law of total probability,
$$ p_m(t) = \int_0^t f_\theta(t')\,p_{m-\theta}(t - t')\,dt'. \tag{9.49} $$

Figure 9.11. Paths that lead to the renewal equation.

This integral equation is called a renewal equation; we will solve it using Laplace transform methods. Note that the probability of a transition from $\theta$ at $t'$ to $m$ at $t$ is the same as the probability of a transition from 0 at time zero to $m - \theta$ at $t - t'$ (spatial and temporal homogeneity). Equation (9.49) is a convolution of $f_\theta$ and $p_{m-\theta}$. From Table 3.2 we see that the Laplace transform of the convolution of two functions is the product of their Laplace transforms. Thus, denoting Laplace transforms by the extra subscript $L$, we have $p_{m,L}(s) = f_{\theta,L}(s)\,p_{m-\theta,L}(s)$, so that $f_{\theta,L}(s) = p_{m,L}(s)/p_{m-\theta,L}(s)$. Inserting the transform of (9.43) and simplifying, one finds
$$ f_{\theta,L}(s) = \left[\frac{s + \lambda_E + \lambda_I - \sqrt{(s + \lambda_E + \lambda_I)^2 - 4\lambda_E\lambda_I}}{2\lambda_I}\right]^{\theta}. \tag{9.54} $$

Moments of the firing time
Let $T_\theta$ be the (random) time taken for $V$ to reach $\theta$ from the initial state zero (resting state). Then $T_\theta$ has the probability density function $f_\theta$. The $n$th moment of $T_\theta$ is
$$ E\!\left[T_\theta^n\right] = \int_0^\infty t^n f_\theta(t)\,dt. \tag{9.55} $$
When $n = 0$ we obtain the total probability mass of $T_\theta$ concentrated on $(0, \infty)$. That is, $P_0 = \Pr\{T_\theta < \infty\} = \int_0^\infty f_\theta(t)\,dt = f_{\theta,L}(0)$, which from (9.54) is unity if $\lambda_E \ge \lambda_I$ and $(\lambda_E/\lambda_I)^\theta$ if $\lambda_E < \lambda_I$. When $\lambda_E > \lambda_I$, the mean and variance of the firing time can be found with the aid of the following relation (Gradshteyn and Ryzhik 1965, page 708):
$$ \int_0^\infty e^{-\alpha x}I_\nu(\beta x)\,dx = \frac{\beta^\nu}{\sqrt{\alpha^2 - \beta^2}\left(\alpha + \sqrt{\alpha^2 - \beta^2}\right)^{\nu}}. \tag{9.58} $$
This yields
$$ E[T_\theta] = \frac{\theta}{\lambda_E - \lambda_I}, \tag{9.59} $$
$$ \mathrm{Var}[T_\theta] = \frac{\theta(\lambda_E + \lambda_I)}{(\lambda_E - \lambda_I)^3}. \tag{9.60} $$
The coefficient of variation, the standard deviation divided by the mean, is
$$ \mathrm{CV}[T_\theta] = \left[\frac{\lambda_E + \lambda_I}{\theta(\lambda_E - \lambda_I)}\right]^{1/2}, \tag{9.61} $$
which indicates that the coefficient of variation is inversely proportional to the square root of the threshold for fixed rates of excitation and inhibition. When $\lambda_E = \lambda_I$, although $T_\theta < \infty$ with probability one, the mean (and higher-order moments) of $T_\theta$ is infinite.

Note that we have assumed that the excitatory and inhibitory jumps of $V$ are of unit size. If instead the jumps have magnitude $a$, so that
$$ V(t) = a\left[N_E(t) - N_I(t)\right], \tag{9.62} $$
then with a threshold $\theta > 0$, not necessarily an integer, the time to reach threshold will be the time for the process with unit jumps to reach $[1 + \theta/a]$.
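A small Monte Carlo check of formulas (9.59)-(9.61) is easy to carry out (an addition, not from the original text): successive jumps occur after exponential($\lambda_E + \lambda_I$) waiting times and are $+1$ with probability $\lambda_E/(\lambda_E + \lambda_I)$, as noted above. The rate values are arbitrary, with $\lambda_E > \lambda_I$.

```python
import numpy as np

rng = np.random.default_rng(5)

def first_passage_times(lam_e, lam_i, theta, n):
    """Monte Carlo first-passage times of the randomized random walk to level theta."""
    lam, p = lam_e + lam_i, lam_e / (lam_e + lam_i)
    out = np.empty(n)
    for i in range(n):
        v, t = 0, 0.0
        while v < theta:
            t += rng.exponential(1.0 / lam)          # waiting time to the next jump
            v += 1 if rng.random() < p else -1       # +1 with prob. lam_e/lam, else -1
        out[i] = t
    return out

lam_e, lam_i, theta = 3.0, 1.0, 5                    # illustrative values
t_fp = first_passage_times(lam_e, lam_i, theta, 20_000)
print("mean:", t_fp.mean(), "vs", theta / (lam_e - lam_i))                         # Eq. (9.59)
print("var :", t_fp.var(), "vs", theta * (lam_e + lam_i) / (lam_e - lam_i) ** 3)   # Eq. (9.60)
print("CV  :", t_fp.std() / t_fp.mean(),
      "vs", np.sqrt((lam_e + lam_i) / (theta * (lam_e - lam_i))))                  # Eq. (9.61)
```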
Tails of the firing-time density
Using the following asymptotic relation for the modified Bessel function at large arguments (Abramowitz and Stegun 1965, page 377),
$$ I_\nu(x) \sim \frac{e^x}{\sqrt{2\pi x}}\left[1 - \frac{4\nu^2 - 1}{8x} + \cdots\right], \qquad x \to \infty, \tag{9.63} $$
we deduce that when there is both Poisson excitation and Poisson inhibition,
$$ f_\theta(t) \sim \frac{\theta\,(\lambda_E/\lambda_I)^{\theta/2}}{2\sqrt{\pi}\,(\lambda_E\lambda_I)^{1/4}\,t^{3/2}}\,e^{-\left(\sqrt{\lambda_E} - \sqrt{\lambda_I}\right)^2 t}\left[1 - \frac{4\theta^2 - 1}{16t\sqrt{\lambda_E\lambda_I}}\right], \qquad t \to \infty, \tag{9.64} $$
whereas when there is Poisson excitation only, we have the exact result
$$ f_\theta(t) = \frac{\lambda_E(\lambda_E t)^{\theta-1}e^{-\lambda_E t}}{(\theta - 1)!}. \tag{9.65} $$
Thus the density of the first-passage time to level $\theta$ is quite different in its tails, depending on the presence or absence of inhibition.

9.6 The Wiener process
We will soon proceed to more realistic models, which incorporate the decay of membrane potential between synaptic inputs. Before doing so, we consider an approximation to the process $V$ defined in the previous section. The approximating process is a Wiener process (or Brownian motion), which belongs to the general class of Markov processes called diffusion processes. These general concepts will be explained later. Gerstein and Mandelbrot (1964) pioneered the use of the Wiener process in neural modeling. Diffusion processes have trajectories that are continuous functions of $t$, in distinction to the randomized random walk, whose sample paths are discontinuous. The study of diffusion processes is often less difficult than that of their discontinuous counterparts, chiefly because the equations describing their important properties are differential equations, about which more is known than the differential-difference equations that arise for discontinuous processes. Among the reasons for studying the Wiener process as a model for nerve membrane potential are: (i) it is a thoroughly studied process and many of the relevant mathematical problems have been solved; and (ii) from the Wiener process we may construct many other more realistic models of nerve-cell activity.

The Wiener process as a limiting case of a random walk
Consider the process defined by
$$ V_a(t) = a\left[N_E(t) - N_I(t)\right], \qquad t \ge 0, \tag{9.66} $$
where $a$ is a constant and $N_E$ and $N_I$ are independent Poisson processes with mean rates $\lambda_E = \lambda_I = \lambda$. The process $V_a$ has jumps up and down of magnitude $a$. We note that
$$ E[V_a(t)] = a(\lambda_E - \lambda_I)t = 0, \tag{9.67} $$
$$ \mathrm{Var}[V_a(t)] = a^2\left[\mathrm{Var}[N_E(t)] + \mathrm{Var}[N_I(t)]\right] = 2a^2\lambda t. \tag{9.68} $$
The characteristic function of $V_a(t)$ is
$$ \phi_a(u; t) = E\!\left[\exp(iuV_a(t))\right] = E\!\left[\exp(iuaN_E(t))\right]E\!\left[\exp(-iuaN_I(t))\right], \tag{9.69} $$
by the independence of $N_E$ and $N_I$. From Section 9.2 we find
$$ \phi_a(u; t) = \exp\!\left[\lambda t\left(e^{iua} + e^{-iua} - 2\right)\right]. \tag{9.70} $$
Expanding the exponentials for small $a$ gives $e^{iua} + e^{-iua} - 2 = -u^2a^2 + O(a^4)$. If we now let $a \to 0$ and $\lambda \to \infty$ in such a way that $2\lambda a^2$ remains constant, equal to $\sigma^2$ say, then $\phi_a(u; t) \to \exp(-\sigma^2u^2t/2)$, which is the characteristic function of a Gaussian random variable with mean zero and variance $\sigma^2 t$. The limiting process is a Wiener process with variance parameter $\sigma^2$; when $\sigma^2 = 1$ it is the standard Wiener process $W = \{W(t), t \ge 0\}$, which has the properties: (a) $W(0) = 0$; (b) for any $0 = t_0 < t_1 < \cdots < t_n$, the increments $W(t_k) - W(t_{k-1})$, $k = 1, 2, \ldots, n$, are mutually independent; and (c) for any $0 \le t_1 < t_2$, $W(t_2) - W(t_1)$ is a Gaussian random variable with mean zero and variance $t_2 - t_1$; in particular, $E[W(t)] = 0$ and $\mathrm{Var}[W(t)] = t$.

An attempt is made in Figure 9.12 to depict some sample paths of $W$. Although the paths of $W$ are continuous, they are, with probability one, nondifferentiable. Nevertheless, the "derivative" of $W$, denoted by $w$, is called white noise and is a useful concept. Whenever $w$ appears in an equation, an integration is implied.

Figure 9.12. A sketch of two sample paths of W.

Wiener process with drift
We may construct new processes from $W$ by multiplying it by a constant and adding a linear drift. Thus
$$ X(t) = x_0 + \sigma W(t) + \mu t, \qquad t \ge 0, \tag{9.77} $$
where $X(0) = x_0$, defines a Wiener process with drift parameter $\mu$ and variance parameter $\sigma^2$. Linear operations on Gaussian (normal) processes, such as $W$, produce Gaussian processes. Since
$$ E[X(t)] = x_0 + \mu t, \tag{9.78} $$
$$ \mathrm{Var}[X(t)] = \sigma^2 t, \tag{9.79} $$
the density of $X(t)$ is
$$ p(x, t \mid x_0) = \frac{1}{\sqrt{2\pi\sigma^2 t}}\exp\!\left[-\frac{(x - x_0 - \mu t)^2}{2\sigma^2 t}\right], \qquad t > 0. \tag{9.80} $$
In the nerve-cell model with excitatory and inhibitory jumps of sizes $a_E$ and $a_I$,
$$ V(t) = a_E N_E(t) - a_I N_I(t), \tag{9.81} $$
the corresponding approximating Wiener process has drift $\mu = a_E\lambda_E - a_I\lambda_I$ and variance parameter $\sigma^2 = a_E^2\lambda_E + a_I^2\lambda_I$.
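Sample paths of the Wiener process with drift are easily generated from independent Gaussian increments. The sketch below (an addition, with arbitrary parameter values chosen to match the diffusion approximation just described) checks the mean and variance formulas (9.78) and (9.79).

```python
import numpy as np

rng = np.random.default_rng(6)

def wiener_with_drift(x0, mu, sigma, t_max, dt, n_paths):
    """Paths of X(t) = x0 + sigma*W(t) + mu*t, Eq. (9.77), built from
    independent N(mu*dt, sigma^2*dt) increments on a grid of step dt."""
    n_steps = int(t_max / dt)
    dx = rng.normal(mu * dt, sigma * np.sqrt(dt), size=(n_paths, n_steps))
    return x0 + np.cumsum(dx, axis=1)

# Illustrative values: mu = a_E*lam_E - a_I*lam_I = 2, sigma^2 = a_E^2*lam_E + a_I^2*lam_I = 3
x0, mu, sigma, t_max, dt = 0.0, 2.0, np.sqrt(3.0), 5.0, 0.005
paths = wiener_with_drift(x0, mu, sigma, t_max, dt, 4000)
print("E[X(t_max)]  ~", paths[:, -1].mean(), "vs", x0 + mu * t_max)       # Eq. (9.78)
print("Var[X(t_max)] ~", paths[:, -1].var(), "vs", sigma ** 2 * t_max)    # Eq. (9.79)
```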
First passage to threshold
Suppose now that the cell fires when the depolarization $X$, with $X(0) = x_0$, first reaches the threshold level $\theta > x_0$; the interspike interval is identified with the first-passage time $T_\theta$.

(A) The case $\mu = 0$: method of images
When there is no drift, the method of images used for the symmetric randomized random walk again applies, and
$$ \Pr\{T_\theta > t\} = \Pr\!\left\{\max_{0 \le t' \le t}X(t') < \theta \,\middle|\, X(0) = x_0\right\} = \int_{-\infty}^{\theta}\left[p(x, t \mid x_0) - p(x, t \mid 2\theta - x_0)\right]dx. \tag{9.88} $$
Now (again roughly speaking) transitions from $2\theta - x_0$ to $x$ occur with the same probabilities as transitions from $x_0$ to $2\theta - x$, so $p(x, t \mid 2\theta - x_0) = p(2\theta - x, t \mid x_0)$. Thus, since $\Pr\{T_\theta > t\} = 1 - \Pr\{T_\theta \le t\}$, $t > 0$,
$$ \Pr\{T_\theta \le t\} = 1 - \int_{-\infty}^{\theta}\left[p(x, t \mid x_0) - p(2\theta - x, t \mid x_0)\right]dx. \tag{9.89} $$
Changing variables in the second integral yields
$$ \Pr\{T_\theta \le t\} = 2\int_{\theta}^{\infty}p(x, t \mid x_0)\,dx. \tag{9.90} $$
With the additional change of variable $z = (x - x_0)/\sigma\sqrt{t}$, we get
$$ \Pr\{T_\theta \le t\} = \frac{2}{\sqrt{2\pi}}\int_{(\theta - x_0)/\sigma\sqrt{t}}^{\infty}e^{-z^2/2}\,dz, \qquad \theta > x_0. \tag{9.91} $$
Letting $t \to \infty$ in (9.91), we see immediately that passage to level $\theta$ is certain.

(B) The general case including $\mu \ne 0$: renewal equation
Using the same argument as for the general randomized random walk, we obtain the renewal equation
$$ p(x, t \mid x_0) = \int_0^t f_\theta(t')\,p(x, t - t' \mid \theta)\,dt', \qquad x > \theta. \tag{9.93} $$
Taking Laplace transforms and rearranging gives
$$ f_{\theta,L}(s) = \frac{p_L(x, s \mid x_0)}{p_L(x, s \mid \theta)}. \tag{9.94} $$
With the aid of the standard transform (Abramowitz and Stegun 1965, page 1026),
$$ \mathcal{L}\!\left\{\frac{1}{\sqrt{\pi t}}e^{-k^2/4t}\right\} = \frac{1}{\sqrt{s}}e^{-k\sqrt{s}}, \qquad k \ge 0, \tag{9.95} $$
we find
$$ p_L(x, s \mid x_0) = \frac{1}{\sigma\sqrt{2(s - c)}}\exp\!\left[\frac{\mu(x - x_0)}{\sigma^2}\right]\exp\!\left[-\frac{\sqrt{2(s - c)}\,|x - x_0|}{\sigma}\right], \tag{9.96} $$
where $c = -\mu^2/2\sigma^2$. It is left as an exercise to show that the Laplace transform of the first-passage time density is
$$ f_{\theta,L}(s) = \exp\!\left\{\frac{\theta - x_0}{\sigma^2}\left(\mu - \sqrt{\mu^2 + 2\sigma^2 s}\right)\right\}. \tag{9.97} $$
The inversion of this transform is facilitated by another standard result (Abramowitz and Stegun 1965, page 1026),
$$ \mathcal{L}\!\left\{\frac{k}{2\sqrt{\pi}\,t^{3/2}}e^{-k^2/4t}\right\} = e^{-k\sqrt{s}}, \qquad k > 0, \tag{9.98} $$
which yields the inverse Gaussian density
$$ f_\theta(t) = \frac{\theta - x_0}{\sigma\sqrt{2\pi}\,t^{3/2}}\exp\!\left[-\frac{(\theta - x_0 - \mu t)^2}{2\sigma^2 t}\right], \qquad t > 0,\ \theta > x_0. \tag{9.99} $$

Moments of the firing time
We may find the probability that $X$ ever reaches $\theta$ by utilizing the relation $\Pr\{T_\theta < \infty\} = \int_0^\infty f_\theta(t)\,dt = f_{\theta,L}(0)$, which gives
$$ \Pr\{T_\theta < \infty\} = \begin{cases} 1, & \mu \ge 0, \\ \exp\!\left[-2|\mu|(\theta - x_0)/\sigma^2\right], & \mu < 0. \end{cases} \tag{9.102} $$
Thus, if the drift is zero or toward the threshold, an action potential is generated in a finite time with probability one. On the other hand, as with the random-walk model, if the drift is away from the threshold there is a nonzero probability that the cell never fires. When $\mu > 0$, differentiating (9.97) at $s = 0$ gives the mean and variance of the firing time,
$$ E[T_\theta] = \frac{\theta - x_0}{\mu}, \qquad \mathrm{Var}[T_\theta] = \frac{(\theta - x_0)\sigma^2}{\mu^3}. $$
When $\mu = 0$ the first- and higher-order moments of $T_\theta$ are infinite, as they must also be when $\mu < 0$. In terms of the original physiological parameters of the model, assuming the initial value of the membrane potential is the resting level, the mean ISI is
$$ E[T_\theta] = \frac{\theta}{a_E\lambda_E - a_I\lambda_I}, \tag{9.107} $$
and the variance is
$$ \mathrm{Var}[T_\theta] = \frac{\theta\left(a_E^2\lambda_E + a_I^2\lambda_I\right)}{\left(a_E\lambda_E - a_I\lambda_I\right)^3}, \qquad a_E\lambda_E > a_I\lambda_I. \tag{9.108} $$
This gives a coefficient of variation of
$$ \mathrm{CV}[T_\theta] = \left[\frac{a_E^2\lambda_E + a_I^2\lambda_I}{\theta\left(a_E\lambda_E - a_I\lambda_I\right)}\right]^{1/2}. \tag{9.109} $$
Again, for fixed values of the remaining parameters, the coefficient of variation of the ISI is inversely proportional to the square root of the threshold.

A numerical example
With time in units of the membrane time constant and voltages in millivolts, we will find the mean, variance, coefficient of variation, and density of the firing time for the following parameter values: $\theta = 10$ mV, $a_E = a_I = 1$ mV, $\lambda_E = 2.5$, and $\lambda_I = 0.5$. Then
$$ E[T_\theta] = 5, \qquad \mathrm{Var}[T_\theta] = \tfrac{15}{4}, \qquad \mathrm{CV}[T_\theta] \approx 0.39. $$
The density of $T_\theta$ is sketched in Figure 9.14.

Figure 9.14. Density of the firing time for the Wiener-process-with-drift model with parameters as given in the text.
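For completeness, this short sketch (an addition, not part of the original text) reproduces the numerical example by evaluating (9.107)-(9.109) and the inverse Gaussian density (9.99) at the stated parameter values.

```python
import numpy as np

theta, a_e, a_i, lam_e, lam_i = 10.0, 1.0, 1.0, 2.5, 0.5   # parameters of the example
mu = a_e * lam_e - a_i * lam_i                  # drift: 2 mV per time constant
sigma2 = a_e ** 2 * lam_e + a_i ** 2 * lam_i    # variance parameter: 3 mV^2 per time constant

mean = theta / mu                               # Eq. (9.107) -> 5
var = theta * sigma2 / mu ** 3                  # Eq. (9.108) -> 3.75
cv = np.sqrt(var) / mean                        # Eq. (9.109) -> ~0.39
print(mean, var, cv)

def f_theta(t, x0=0.0):
    """Inverse Gaussian first-passage density, Eq. (9.99)."""
    return ((theta - x0) / np.sqrt(2.0 * np.pi * sigma2 * t ** 3)
            * np.exp(-(theta - x0 - mu * t) ** 2 / (2.0 * sigma2 * t)))

dt = 0.01
t = np.arange(dt, 40.0, dt)
print("density integrates to ~", f_theta(t).sum() * dt)    # close to 1, since mu > 0
```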
Figure 9.15. The fitting of an experimental ISI histogram to the first-passage time density of a Wiener process with drift. [From Gerstein and Mandelbrot (1964). Reproduced from The Biophysical Journal by copyright permission of The Biophysical Society.]

Although the Wiener process with drift has a poor physiological foundation as a model for nerve-cell membrane potential, formula (9.99) for the first-passage time density has been successfully fitted to the ISI histograms of real neurons. For example, Gerstein and Mandelbrot (1964) put $f_\theta(t)$ in the form
$$ f_\theta(t) = Kt^{-3/2}\exp(-a/t - bt), \tag{9.110} $$
regarding $a$ and $b$ as parameters, with $K$ determined by the normalization condition $\int_0^\infty f_\theta(t)\,dt = 1$. One case of their fitting procedure is shown in Figure 9.15 for a cell in the cat cochlear nucleus. Although the agreement between experimental and theoretical firing-time distributions is excellent in this and other cases, the fitting procedure is of limited use. The model itself does not incorporate the realities of nerve-cell behavior, and the parameters are not related to the physiological variables.

9.7 Markov processes
In the following few sections we will need some basic results from the theory of Markov processes. This section contains a brief review of the pertinent results. Technicalities are omitted, as only the applications are needed. For advanced mathematical details and foundations see such books as those of Dynkin (1965), Breiman (1968), and Gihman and Skorohod (1972, 1974, 1975, 1976). To say that $\{X(t), t \ge 0\} = X$ is a Markov process is to say that if $X$ is known at time $s$, then the probabilities that $X$ takes on its various possible values at any future time are completely determined.

Transition probability functions
Markov processes are characterized by their transition probability functions. Let $\{X(t), t \ge 0\}$ be a Markov process in continuous time. Suppose at time $s$, $X$ is known to have the value $x$. We call $P(y, t \mid x, s) = \Pr\{X(t) \le y \mid X(s) = x\}$, $t \ge s$, the transition probability distribution function of the process; when it is differentiable in $y$, its derivative $p(y, t \mid x, s) = \partial P/\partial y$ is the transition probability density function. For example, for the Wiener process with drift parameter $\mu$ and variance parameter $\sigma^2$, the transition probability distribution function is
$$ P(y, t \mid x, s) = \int_{-\infty}^{y}\frac{1}{\sqrt{2\pi\sigma^2(t-s)}}\exp\!\left[-\frac{(z - x - \mu(t-s))^2}{2\sigma^2(t-s)}\right]dz, \tag{9.113} $$
and the transition probability density function is
$$ p(y, t \mid x, s) = \frac{1}{\sqrt{2\pi\sigma^2(t-s)}}\exp\!\left[-\frac{(y - x - \mu(t-s))^2}{2\sigma^2(t-s)}\right]. \tag{9.114} $$

Infinitesimal generators
Markov processes admit a characterization through operators that describe the changes in the process in small time intervals. Let $X$ be a Markov process in continuous time. Then the infinitesimal operator, or infinitesimal generator, of $X$ is defined for suitable functions $f$ through
$$ (\mathcal{A}f)(x) = \lim_{\Delta t \downarrow 0}\frac{E\!\left[f(X(t + \Delta t)) - f(X(t)) \mid X(t) = x\right]}{\Delta t}. \tag{9.115} $$
The infinitesimal operator of a process can be calculated from a knowledge of its transition probability function. In the theory of Markov processes it is shown how the reverse step can be carried out through the Kolmogorov partial differential equations.

Examples
(i) Poisson process. For the Poisson process, if $X(t) = x$, then $X(t + \Delta t) = x + 1$ with probability $\lambda\Delta t + o(\Delta t)$ and $X(t + \Delta t) = x$ with probability $1 - \lambda\Delta t + o(\Delta t)$. Thus
$$ E\!\left[f(X(t + \Delta t)) - f(X(t)) \mid X(t) = x\right] = \lambda\Delta t\,f(x + 1) + (1 - \lambda\Delta t)f(x) - f(x) + o(\Delta t). $$
Putting this in (9.115) and carrying out the limiting operation gives
$$ (\mathcal{A}f)(x) = \lambda\left[f(x + 1) - f(x)\right]. $$
(ii) Randomized random walk. A similar calculation shows that for the process of Equation (9.81),
$$ (\mathcal{A}f)(x) = \lambda_E f(x + a_E) + \lambda_I f(x - a_I) - (\lambda_E + \lambda_I)f(x). $$
(iii) Wiener process with drift. For the Wiener process with drift, use of (9.115) leads to
$$ (\mathcal{A}f)(x) = \frac{\sigma^2}{2}\frac{d^2f}{dx^2} + \mu\frac{df}{dx}. $$

Diffusion processes
Roughly speaking, diffusion processes are continuous-time Markov processes whose sample paths are continuous. Such processes are characterized by their infinitesimal mean
$$ \alpha(x) = \lim_{\Delta t \downarrow 0}\frac{E\!\left[X(t + \Delta t) - X(t) \mid X(t) = x\right]}{\Delta t} \tag{9.116} $$
and infinitesimal variance
$$ \beta^2(x) = \lim_{\Delta t \downarrow 0}\frac{\mathrm{Var}\!\left[X(t + \Delta t) - X(t) \mid X(t) = x\right]}{\Delta t}. \tag{9.117} $$
The infinitesimal generator of such a process is
$$ (\mathcal{A}f)(x) = \alpha(x)\frac{df}{dx} + \frac{\beta^2(x)}{2}\frac{d^2f}{dx^2}. \tag{9.118} $$
Diffusion processes can also be described by their stochastic differential equations. The process with infinitesimal generator (9.118) has the stochastic differential
$$ dX = \alpha(X)\,dt + \beta(X)\,dW, \tag{9.119} $$
where $W$ is a standard Wiener process. The differential relation is defined by its integral version
$$ X(t) = X_0 + \int_0^t \alpha(X(t'))\,dt' + \int_0^t \beta(X(t'))\,dW(t'), \tag{9.120} $$
where the initial condition is $X(0) = X_0$. The first integral in (9.120) is a Riemann integral, but the second is a stochastic integral with respect to $W$ (Itô 1951; Stratonovich 1966). This integral is defined, and some of its properties are given, for example, in Jazwinski (1970).
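A standard way to generate approximate sample paths of the diffusion defined by (9.119)-(9.120) is the Euler-Maruyama discretization. The sketch below (an addition; this scheme is not discussed in the original text) applies it to arbitrary choices of $\alpha$ and $\beta$ and checks the constant-coefficient case against the Wiener process with drift of Section 9.6.

```python
import numpy as np

rng = np.random.default_rng(7)

def euler_maruyama(alpha, beta, x0, t_max, dt, n_paths):
    """Approximate paths of dX = alpha(X) dt + beta(X) dW, Eq. (9.119),
    via X_{k+1} = X_k + alpha(X_k)*dt + beta(X_k)*sqrt(dt)*Z_k, Z_k ~ N(0,1)."""
    n_steps = int(t_max / dt)
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        x = x + alpha(x) * dt + beta(x) * np.sqrt(dt) * z
    return x

# Constant coefficients recover the Wiener process with drift (illustrative values)
mu, sigma, t_max = 2.0, 1.5, 4.0
x_end = euler_maruyama(lambda x: mu + 0.0 * x, lambda x: sigma + 0.0 * x,
                       0.0, t_max, 0.001, 5000)
print(x_end.mean(), "vs", mu * t_max)             # E[X(t)] = mu*t
print(x_end.var(), "vs", sigma ** 2 * t_max)      # Var[X(t)] = sigma^2*t
```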
Processes with jumps
Consider a Markov random process $Y$ that has jumps of various magnitudes. Let $\nu(t, A)$ record the number of jumps of $Y$ up to time $t$ that have magnitudes in the set $A$. Suppose that, for fixed $A$, $\nu(t, A)$ is a temporally homogeneous Poisson process with mean rate $\Pi(A)$ depending on $A$. Then $E[\nu(t, A)] = t\Pi(A)$. Suppose further that if $A_i$, $i = 1, 2, \ldots, n$, are disjoint sets, then $\nu(t, A_1), \nu(t, A_2), \ldots, \nu(t, A_n)$ are mutually independent. If we integrate over all possible jump amplitudes, we recover the original process $Y$,
$$ Y(t) = \int_{\mathbb{R}} u\,\nu(t, du), \tag{9.121} $$
and the total jump rate (i.e., mean rate of jumps of all magnitudes) is
$$ \Lambda = \int_{\mathbb{R}}\Pi(du). \tag{9.122} $$
The process $Y$ is a compound Poisson process. If the rate measure $\Pi$ has a density $\pi$, so that $\Pi(du) = \pi(u)\,du$, then
$$ \Lambda = \int_{\mathbb{R}}\pi(u)\,du. \tag{9.123} $$

Example
For the randomized random walk, jumps of magnitude $+a_E$ and $-a_I$ occur with mean rates $\lambda_E$ and $\lambda_I$, respectively. Then $\pi(u) = \lambda_E\delta(u - a_E) + \lambda_I\delta(u + a_I)$, and the total mean rate of jumps is $\Lambda = \int_{\mathbb{R}}\pi(u)\,du = \lambda_E + \lambda_I$.

A stochastic differential equation can be written down that describes a general Markov process with diffusion and jump components,
$$ dX = \alpha(X)\,dt + \beta(X)\,dW + \int_{\mathbb{R}}\gamma(X, u)\,\nu(dt, du). \tag{9.124} $$
Again the differential is an abbreviation for the integral equation
$$ X(t) = X(0) + \int_0^t\alpha(X(t'))\,dt' + \int_0^t\beta(X(t'))\,dW(t') + \int_0^t\!\int_{\mathbb{R}}\gamma(X(t'), u)\,\nu(dt', du), \tag{9.125} $$
where the third integral is a stochastic integral with respect to $\nu$. It may be shown (Gihman and Skorohod 1972) that the infinitesimal generator of the process defined by (9.125) is
$$ (\mathcal{A}f)(x) = \alpha(x)\frac{df}{dx} + \frac{\beta^2(x)}{2}\frac{d^2f}{dx^2} + \int_{\mathbb{R}}f\!\left(x + \gamma(x, u)\right)\Pi(du) - \Lambda f(x). \tag{9.126} $$

First-exit times
The part of the theory that concerns us most is the theory of exit times, as these relate directly to the random firing of neurons. Suppose $X(0) = x$, where $a < x < b$. The first-exit time of $X$ from the interval $(a, b)$ is the random variable
$$ T_{ab}(x) = \inf\{t \mid X(t) \notin (a, b)\}, \qquad X(0) = x \in (a, b). \tag{9.127} $$
Let the distribution function of $T_{ab}(x)$ be
$$ F_{ab}(x, t) = \Pr\{T_{ab}(x) \le t\}. \tag{9.128} $$
Then (Tuckwell 1976a) $F_{ab}$ can be found as the solution of
$$ \frac{\partial F_{ab}}{\partial t} = \mathcal{A}F_{ab}, \qquad x \in (a, b),\ t > 0, \tag{9.129} $$
where $\mathcal{A}$ is a partial differential-integral operator, the infinitesimal operator of $X$ acting on the $x$ variable. The initial condition is
$$ F_{ab}(x, 0) = \begin{cases} 0, & x \in (a, b), \\ 1, & x \notin (a, b), \end{cases} \tag{9.130} $$
and the boundary condition is
$$ F_{ab}(x, t) = 1, \qquad x \notin (a, b),\ t \ge 0. \tag{9.131} $$
Differentiating $F_{ab}$ with respect to $t$, we get the density of the first-exit time, $f_{ab}(x, t)$, which satisfies the same equation (9.129) as $F_{ab}$, with the conditions
$$ f_{ab}(x, t) = \delta(t), \qquad x \notin (a, b), \tag{9.132} $$
$$ f_{ab}(x, 0) = 0, \qquad x \in (a, b). \tag{9.133} $$
In the case of diffusion processes, where exit from $(a, b)$ is attained by hitting either $a$ or $b$, the condition $x \notin (a, b)$ can be replaced by $x = a$ or $x = b$. The moments of the first-exit time are defined through