%TAMS(#205, handled by D. Rudolph, accepted 8/22/96)
%last version 14.january.97
\magnification=\magstep1
\hfuzz=2pt
%%%%%%%%%%%%
%%%%%%%%%%%%%%% page layout %%%%%%%%%%%%%%%%%%%%%%%%%%%%
\headline={\ifnum\pageno=1 \hfil\else\hss{\tenrm\folio}\hss\fi}
\footline={\hfil}
\hoffset=-.2cm
%%%%%%%%%%%%
%%%%%%%%%% fonts %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\font\titlefont= cmbx10 scaled \magstep3
%\font\draftfont= cmsl10 scaled \magstep3
%\font\titlefont= cmbx10 scaled \magstep1
%\font\titlefont= cmbx10
\font\draftfont= cmsl10
%\font\titlefont= ambx10 scaled \magstep1
%%%%%%%%%
%%%%%%%%%% general math defs %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\def\integer{\mathchoice{\rm I\hskip-1.9pt N}{\rm I\hskip-1.9pt N}
%{\rm I\hskip-1.4pt N}{\rm I\hskip-.5pt N}}
%\def\integer{\mathchoice{\bf N}{\bf N}{\bf N}{\bf N}}
%\def\real{\mathchoice{\rm I\hskip-1.9pt R}{\rm I\hskip-1.9pt R}
%{\rm I\hskip-.8pt R}{\rm I\hskip-1.9pt R}}
%\def\real{{\bf R}{\bf R}{\bf R}{\bf R}}
\def\integer{{\bf N}}
\def\real{{\bf R}}
\def\Real{\real}
\def\Log{\mathop{\rm Log}}
\def\Romannumeral#1{\uppercase\expandafter{\romannumeral#1}}
\def\date{\line{\number\day/\number\month/\number\year\hfil}}
\font\smallrm=cmr8
\def\bv{{\cal B}}
\def\norm{|||}
\def\normt{\norm_\theta}
%\def\draft{\line{\draftfont DRAFT\ \ \
%\number\day/\number\month/\number\year\hfil}\bigskip}
\def\expectation{\mathchoice{\rm I\hskip-1.9pt E}{\rm I\hskip-1.9pt E}
{\rm I\hskip-.8pt E}{\rm I\hskip-1.9pt E}}
%\def\zinteger{{\rm Z\hskip-1.9pt\slash}}
%\def\zinteger{\mathchoice{{\bf Z}{\bf Z}{\bf Z}{\bf Z}}}
\def\zinteger{{\bf Z}}
\def\Oun{{\cal O}(1)}
\def\oun{{\hbox{\sevenrm o}(1)}}
\def\proof{\noindent{\bf Proof. }}
\def\longto{\mathop{\relbar\joinrel\relbar\joinrel\relbar%
\joinrel\relbar\joinrel\longrightarrow}}
\font\bigmath=cmmi10 scaled \magstep2
%\font\bigmath=cmmi10
\def\bigchi{\hbox{\bigmath \char31}}
%\def\bigchi{\chi}
\def\bigpi{\hbox{\bigmath \char25}}
%%%%%%%%
%%%%%%%%%%% definitions for references %%%%%%%%%%%%%%%%%%%%%%%%%
\newskip\refskip\refskip=4em
\def\refsize{\advance\leftskip by \refskip}
\def\ref#1#2{\noindent\hskip -\refskip\hbox to
\refskip{[#1]\hfil}{\noindent #2\hfil}\medskip}
%%%%%%%%
%%%%%%%%%%%%%%%%% local defs %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\def\proba{\mathchoice{\rm I\hskip-1.9pt P}{\rm I\hskip-1.9pt P}
{\rm I\hskip-.9pt P}{\rm I\hskip-1.9pt P}}
\def\logd{\log\log}
\def\shift{{\cal S}}
\def\bvnorm{|||}
\def\bvn{\bvnorm}
\def\bvnt{\bvnorm_{\theta}}
\def\exo{\par\noindent{\bf Exercise. }}
\gdef
\cqfd{\par\hfill\vrule height12pt width7pt depth-3pt\hbox to .5cm{}}
%%%%%%%%%%%%
%%%%%%%%%%%% code starts here %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\draft
\bigskip
\centerline{\titlefont On the Enhancement of Diffusion by Chaos,}
\bigskip
\centerline{\titlefont Escape Rates and Stochastic Instability.}
\bigskip
\bigskip
\centerline{{\bf Pierre Collet}
\footnote{$^\circ$}{\smallrm C.N.R.S., Physique Th\'eorique, Ecole
Polytechnique, 91128 Palaiseau Cedex, France.
\hfill\break
e-mail: collet@orphee.polytechnique.fr},
{\bf Servet
Mart\'\i nez}\footnote{$^*$}{\smallrm Universidad de Chile,
Facultad de Ciencias F\'{\i}sicas y Matem\'aticas, Departamento de
Ingenier\'{\i}a Matem\'atica, Casilla 170-3 Correo 3, Santiago,
Chile.\hfill\break
e-mail:
smartine@dim.uchile.cl},
{\bf Bernard Schmitt}\footnote{$^\dagger$}{\smallrm Universit\'e de
Bourgogne, D\'epartement de Math\'ematiques, Facult\'e de Sciences
Mirande, BP-138, 21004 Dijon Cedex, France.\hfill\break
e-mail: schmittb@satie.u-bourgogne.fr}}
\vskip 7cm
{\noindent\sl Abstract.} We consider stochastic perturbations of
expanding maps of the interval where the noise can project the
trajectory outside the interval. We estimate the escape rate as a
function of the amplitude of the noise and compare it with the purely
diffusive case. This is done under a technical hypothesis which
corresponds
to stability of the absolutely continuous invariant measure
against small perturbations of the map. We also
discuss in detail a case of instability and show how stability can be
recovered by considering another invariant measure.
\vfill\supereject
\beginsection{I. INTRODUCTION.}
It has been stated several times in the Physics literature that
``diffusion enhances chaos''. The purpose of this paper is to
investigate
quantitatively this question for a family of one dimensional
dynamical
systems (piecewise expanding maps of the interval) with small
stochastic perturbations. In the presence of noise
the trajectory may jump outside the interval. We will compare the
typical time of occurrence of this event for the case of a pure
random walk and of a chaotic dynamics stochastically perturbed.
The effect of stochastic perturbations on
the long time behavior of some chaotic dynamical
systems has received a lot of attention in the past few years (see
[F.W.]) but
we will be interested here in a different problem connected with the
decay of the
total probability. This problem can be formulated generally
as follows. Start
with a map $\Phi$ of a phase space $\Omega$ which has
an ergodic
invariant measure $\mu$. In other words, one is interested in
the sequence of points in the phase space recursively generated by
$$
x_{n+1}=\Phi(x_n)\;.
$$
Assume now this
deterministic process is perturbed by a small stochastic
fluctuation occurring every time step. If the stochastic perturbation
maps $\Omega$ into itself, the total probability will be conserved.
However if there is a non zero probability that the small stochastic
fluctuation maps
the point outside of phase space, the total probability
will decrease. Mapping outside the phase space is not a well defined
concept at
this point but one can imagine for example that $\Omega$ is a
subspace of a larger set $\tilde\Omega$ such that on the set
$\tilde\Omega\backslash\Omega$, $\Phi$
and the stochastic map reduce to the identity.
In physical terms, the problem we have in mind is to consider a large
assembly of identical independent particles evolving in a box $\Omega$
according to the
deterministic map $\Phi$ and small stochastic perturbations. If a
particle escapes the box, it will never come back again (it
dies!). This is what we call leaking.
A natural quantity associated to this problem is the rate of
decay of the number of particles in the box for different types of maps
$\Phi$ (escape rate). As mentioned above, in order to
investigate the influence of chaos on this escape rate
we will consider two types of maps of the unit
interval: the
identity map and regular expanding maps. In both cases we
will estimate the rate of escape (of decay of the total probability)
when small stochastic perturbations are superposed to these maps. We
will see that
for small stochastic perturbations, the rate of escape is
much larger for the chaotic map than for the identity
map.
A simple argument can be given as follows. If one considers a pure
random walk with
steps of size $\epsilon>0$, it follows intuitively from
the central limit theorem that most particles will need a
time of order $\epsilon^{-2}$ to travel a distance of order one needed
to leave
the interval. On the other hand if we have a chaotic map with a
stochastic perturbation of size $\epsilon$, using
Birkhoff's ergodic theorem (see [C.G.] for a rigorous analysis without
noise) it
appears that a typical trajectory will need a time of order only
$\epsilon^{-1}$ to reach a neighborhood of size $\epsilon$ of the
boundary where it has a sizable probability to jump outside. We will
prove indeed in Theorems 1 and 2 that the above intuitive results are
correct.
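This heuristic is easy to probe numerically. The following Monte Carlo sketch (in Python) compares the two mean escape times; the doubling map $2x$ mod $1$ and the uniform noise density are illustrative choices only (the uniform density is not $C^1$, and the doubling map is precisely the example for which hypothesis {\bf H3(2)} below fails), so this is a sanity check of the scaling, not of the theorems' hypotheses.

```python
import random

def escape_time(step, eps, x0=0.5, tmax=10**6):
    """First time n at which X_n = step(X_{n-1}) + eps*xi_n leaves [0,1],
    with xi uniform on [-1,1]."""
    x = x0
    for n in range(1, tmax + 1):
        x = step(x) + eps * random.uniform(-1.0, 1.0)
        if x < 0.0 or x > 1.0:
            return n
    return tmax

random.seed(0)
eps, trials = 0.02, 200
# pure random walk (identity map): typical escape time of order eps**-2
walk = sum(escape_time(lambda x: x, eps) for _ in range(trials)) / trials
# stochastically perturbed doubling map: escape time of order eps**-1 only
chaos = sum(escape_time(lambda x: (2.0 * x) % 1.0, eps)
            for _ in range(trials)) / trials
print(walk, chaos)
```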
Our simple model above can be seen as an oversimplification of the
Poincar\'e sections of two dimensional flows, one leading to a
completely integrable two dimensional map, the other one to a chaotic
dynamic. We also refer to [B.W.Z.] for other arrangements of dynamical
systems and physical applications.
In section II we
will formulate our hypotheses on the dynamical systems
and the stochastic perturbations, and describe the main results about
the typical escape times. These results will be proven in section III
using techniques
developed previously in [C.G.] for controlling some (large)
perturbations of transfer operators.
A basic hypothesis for controlling perturbations of
piecewise expanding maps of the interval is that small invariant
segments cannot occur (see [Ke.], [B.Y.], [B.K.], [B.K.S.]).
In section IV we will
consider such a
situation where basic stochastic stability
seems to fail. However
in the presence of an invariant segment absorbing
asymptotically all the probability it is natural to study the
trajectories which will never penetrate into this segment. Using
techniques similar to those developed in section III we prove in a
simple case that the trajectories are asymptotically distributed
according to an invariant measure supported by a Cantor set,
and it is this measure which in
the limit converges to the SRB measure of the initial unperturbed
map. In other words,
in the presence of small invariant segment produced
by perturbation of a mixing map, it is a singular invariant measure
which converges (weakly)
to the SRB measure of the unperturbed map (when
the perturbation
goes to zero) and not the SRB measure of the perturbed
map. Although we
prove the result here in a special case, it is easy to
extend our method
to a larger class of situations and we conjecture that
a similar phenomenon occurs whenever small invariant segments are
produced by perturbations of mixing piecewise expanding maps. When the
trajectories can exit from the phase space, instead of looking for
stationary distributions one should look for quasi-invariant
distributions as those described in this work. When the phase space is
not compact
their existence as well as the study of the process involved
is non trivial. In this context, see [F.K.M.P.] for the study of
quasi-invariant distributions for Markov chains and [C.M.SM.] for
diffusions.
We recall that if $(X_t)$ is a Markov process and $\tau$ is the
first exit time of some Borel subset $I$ of the phase space then a
probability measure $\mu$ is a quasi-invariant distribution
if $\proba_\mu\{ X_t\in C | \tau> t\}=\mu(C)$ for all Borel subsets
$C\subset I$.
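In the finite-state case this definition can be made completely concrete: for a sub-Markov transition matrix (row sums less than one, the missing mass being the one-step exit probability), the quasi-invariant distribution is the normalized left Perron eigenvector. A small sketch, with a $3\times3$ matrix invented purely for illustration:

```python
import numpy as np

# Sub-Markov matrix on the surviving states I = {0, 1, 2} (illustrative):
# rows sum to 0.9, the missing 0.1 is the one-step probability of leaving I.
T = np.array([[0.5, 0.3, 0.1],
              [0.2, 0.5, 0.2],
              [0.1, 0.3, 0.5]])

# Quasi-invariant distribution: normalized left Perron eigenvector,
# mu T = lambda mu with 0 < lambda < 1.
vals, vecs = np.linalg.eig(T.T)
k = np.argmax(vals.real)
lam = vals[k].real
mu = np.abs(vecs[:, k].real)
mu /= mu.sum()

# Quasi-invariance: conditioning on survival for one step returns mu.
cond = mu @ T
cond /= cond.sum()
print(lam, mu)
```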
\beginsection{II. ESCAPE RATES FOR CHAOS AND DIFFUSION.}
We will denote by $\bv$ the vector space of functions with bounded
variation on the interval $[0,1]$. We will
sometimes implicitly identify this class
of functions with their extensions vanishing outside $[0,1]$.
This space will be equipped with
different (equivalent) norms which depend on a parameter
$\theta>0$. These norms are given by
$$
\norm g\norm_\theta=\theta \vee g+\|g\|_1
$$
where $\|\;\|_p$ is the $L^p$
norm with respect to the Lebesgue measure,
and $\vee g$ is the variation of $g$.
It is easy to verify that equipped with any of these
norms, the space $\bv$ becomes a Banach space denoted below by ${\cal
B}_\theta$. For simplicity we will denote by $\norm\;\norm$ the norm
corresponding to $\theta=1$, and again by ${\cal B}$ the associated
Banach space. Notice
that all the norms $\norm \;\norm_\theta$ are equivalent.
We will denote by $f$ a
piecewise regular expanding map of the interval $[0,1]$.
This is a map which satisfies the following hypotheses.
\proclaim{Hypothesis H1}. {
\item{i)} There is a finite increasing sequence of $l+1$ points
$a_0=0<a_1<\cdots<a_l=1$ such that on each interval $]a_j,a_{j+1}[$
the map $f$ is monotone and extends to a $C^{2}$ function on
$[a_j,a_{j+1}]$.
\item{ii)} There are a constant $A>0$ and a number $\rho>1$ such
that $\forall n$
$$
\inf_x|(f^{n})'(x)|\ge A\rho^n\;.
$$}
It is well known (see [L.Y.], [C.]) that such maps admit an absolutely
continuous invariant
measure. There is also a finite decomposition into
mixing components which are
also regular expanding maps of some interval,
and we shall assume below that one of these components has been
selected.
\proclaim{Hypothesis H2}. {The map
$f$ has a unique absolutely
continuous invariant measure $h_0\,dx$ which is ergodic and mixing.}
The basic tool to
establish the above results is the so called transfer
operator defined for a map $f$ satisfying {\bf H1} by
its action on $\bv$,
$$
Pg(x)=\sum_{y\, :\,f(y)=x}{g(y)\over|f'(y)|}.
$$
Since the domain of $f$ is the interval, all preimages $y$
belong to $[0,1]$.
Below several maps will be considered and when there is some ambiguity
we will denote by $P_{f}$ the transfer operator of the map $f$.
The densities of the
absolutely continuous invariant measures are the (non
negative) eigenvectors of $P$ of eigenvalue 1.
Under hypothesis {\bf H1}, it is known ([L.Y.]) that $P$ is a bounded
linear operator in ${\cal B}$. Moreover
there is a fundamental estimate of
Lasota and Yorke ([L.Y.])
stating that there is a number $\tilde\alpha<1$, and a
number $C>1$ such that for any function
$g\in{\cal B}$ and for any integer $n\ge 0$ we have
$$
\vee(P^ng)\le C\left(\tilde\alpha^n\vee g+\|g\|_1\right)\;.\eqno(LY)
$$
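The contraction expressed by (LY) can be watched numerically by discretizing $P$ with the Ulam method. The sketch below assumes the doubling map $2x$ mod $1$, for which the Ulam matrix on a dyadic grid acts exactly on dyadic step functions; the variation of the iterated density collapses in two steps:

```python
import numpy as np

n = 256
# Ulam matrix of the transfer operator P for f(x) = 2x mod 1:
# cell i receives half of the mass of cells i//2 and i//2 + n//2.
A = np.zeros((n, n))
for i in range(n):
    A[i, i // 2] += 0.5
    A[i, i // 2 + n // 2] += 0.5

p = np.zeros(n)
p[: n // 4] = 4.0 / n            # cell masses of the density 4*chi_[0,1/4]

def variation(masses):
    """Discrete variation of the piecewise-constant density."""
    return np.abs(np.diff(masses * n)).sum()

v = [variation(np.linalg.matrix_power(A, k) @ p) for k in range(4)]
print(v)  # [4.0, 2.0, 0.0, 0.0]: the variation contracts to 0
```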
Under the stronger hypothesis {\bf H2}, the peripheral spectrum
of $P$ consists of the only simple eigenvalue 1, with a non negative
eigenvector denoted by
$h_0$ and eigenform $e_0$ which is the integration
against the (normalized) Lebesgue
measure of the interval. We shall also
assume that $h_0$ is of
integral 1. The rest of the spectrum is contained
in a disk of radius strictly smaller than one, and from the spectral
decomposition Theorem (see [H.K.]) one concludes that there exists a
non negative number $r_0<1$, a
positive constant $B$ and an operator $R$
commuting with $P$ such that
$$
P^n=h_0e_0+R^{n}\quad{\hbox{\rm with, for any integer $n$,}}\quad \norm
R^n\norm \le Br_0^n\;. \eqno(1)
$$
$h_0$ is the density of the absolutely continuous invariant measure.
We now describe the stochastic perturbations of the map $f$. A simple
example would be to consider a sequence of independent identically
distributed random variables $(\xi_n)$ with values in the
interval $[-1,1]$. For a positive number $\epsilon$ we can now define
recursively a Markov process $(X_n)$ by $X_0=x$ and
$$
X_{n+1}=f(X_n)+\epsilon\xi_n\;.
$$
We will denote below by $f_{\xi}$ this random map, namely
$$
f_{\xi}(x)=f(x)+\epsilon\xi\;.
$$
One can extend the results below to more general cases of random
maps. We will however
explain the results and give complete proofs only
for the above type of
random perturbations, with the {\sl i.i.d.} random
variables $(\xi_n)$
having a probability density $\varphi$ supported by
$[-1,1]$, even and $C^{1}$.
This ensures that the
average of $\xi$ is zero, and hence there is no
drift in the noise.
On the other hand compactness
of the support of $\varphi$ is assumed for
simplicity, but most of the results can be proven with less
stringent assumptions.
Of course, if some $f(X_p)$ happens to be at a distance less than
$\epsilon$ from the boundary (0 or 1), there is a non zero probability
that $X_{p+1}$ will be outside the interval $[0,1]$ where the map $f$
is undefined.
Associated to the stochastic perturbation, there is also a transfer
operator ${\cal U}$ given by
$$
{\cal U}g(x)=\expectation \left(P_{f_\xi}g(x)\right)
=\int\sum\limits_{y:f_\xi(y)=x}{g(y)\over |f'_\xi(y)|}\varphi(\xi)d\xi
\;.
$$
It is easy to verify that if a random variable $X$ with values in
$[0,1]$ has a
density $g$ with respect to the Lebesgue measure, then if
$\xi$ is independent of $X$, the random variable $f_\xi(X)$ has a
density ${\cal U}g$ with respect to the Lebesgue measure.
The probability that $f_\xi(X)$ is also in $[0,1]$ is given by
$$
\int_0^1{\cal U}g(x)dx\;.
$$
More generally,
if $X_0$ has distribution with density $g$ supported by
the unit interval, we have for any integer $n\ge0$
$$
\proba\{X_j\in[0,1]\;,\;j=0,\cdots,n\}=\int_0^1Q^ng(x)dx\;,
$$
where
$$
Qg(x)=\bigchi_{[0,1]}(x){\cal U}g(x)\;,
$$
and $\bigchi_A$ denotes the characteristic function of the set $A$.
Note that $Q$ can be written as $Q=\tilde QP$ where $\tilde Q$ is the
operator given by
$$
\tilde Qg(x)=\bigchi_{[0,1]}(x)\expectation_{\xi}
(\bigchi_{[0,1]}(x-\epsilon\xi)
g(x-\epsilon\xi))\;.
$$
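The leading eigenvalue of $Q$ can likewise be approximated by an Ulam-type discretization: build the transfer matrix of $f$, compose it with the mass-losing noise kernel, and take the largest eigenvalue. The sketch assumes the doubling map and uniform noise, again purely for illustration (in particular the uniform density is not the $C^1$ density the theorems assume):

```python
import numpy as np

n, eps = 400, 0.05
cell = 1.0 / n
centers = (np.arange(n) + 0.5) * cell

# Ulam matrix of the transfer operator P of f(x) = 2x mod 1:
# cell i receives half of the mass of cells i//2 and i//2 + n//2.
A = np.zeros((n, n))
for i in range(n):
    A[i, i // 2] += 0.5
    A[i, i // 2 + n // 2] += 0.5

# Noise-and-kill step (tilde Q): uniform noise on [-eps, eps], mass
# landing outside [0,1] is lost.  K[i, j] = fraction of the kernel
# centred at the centre of cell j that lands in cell i.
K = np.zeros((n, n))
for j in range(n):
    lo, hi = centers[j] - eps, centers[j] + eps
    for i in range(n):
        a, b = i * cell, (i + 1) * cell
        K[i, j] = max(0.0, min(b, hi) - max(a, lo)) / (2.0 * eps)

Q = K @ A                     # Q = tilde Q composed with P
lam = np.max(np.linalg.eigvals(Q).real)
print(lam)                    # about 1 - eps/2 here (a = 1/2 for this map)
```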
One of the goals of this paper is to study
the behavior of the sequence
$\proba\{X_j\in[0,1]\;,\;j=0,\cdots,n\}$ for large $n$ and small
$\epsilon$. We will see that
for $\epsilon$ small enough, this sequence decays exponentially fast
with $n$. In the particle interpretation given in the introduction,
this means that
the number of
particles which stay in the interval up to time $n$ decays
exponentially fast. We will establish below (under
adequate hypotheses on the stochastic perturbation) that the
probability decays like $\lambda^{n}$, so that the typical escape time
is $-1/\log\lambda$, where $\lambda$ is the largest eigenvalue
of $Q$.
Another
interesting question is about the distribution of the remaining
particles. Namely we condition the
process $(X_n)$ to stay in the interval, and we ask for the asymptotic
law of this process. We will see below that asymptotically, under the
condition that the process stayed in $[0,1]$ up to time $n$, the
random variable $X_n$ is distributed according to the normalized
quasi invariant measure corresponding to the eigenvalue $\lambda$.
These results will be proven using perturbation theory between $P$ and
$Q$. Intuitively,
if $\epsilon$ is small, one expects $P-Q$ to be small.
However it
turns out that this is not so in $\bv$ because stochastic
perturbations can move the intervals of discontinuity of the map. As a
consequence, in the variation norm, $P-Q$ may be of order one.
This problem is solved below by using several
adequate values $\theta$ in the norm $\norm\quad \norm_\theta$
to balance more
optimally the size of the perturbation terms.
The technique of
introducing several norms to balance seemingly non small
terms in perturbation theory of Ruelle-Perron-Frobenius operators was
introduced in [C.G.] (see also [C.]).
It has since been used to prove several results:
[B.Y.], [B.I.S.].
We now make an
important hypothesis on the orbits of the points where $f$ is
not regular, that is to say the points $(a_j)$,
which will ensure stability under
the stochastic perturbation. It is known (see [Ke.], [B.Y.]) that
if this condition is not satisfied, one may observe stochastic
instability. In particular we want to
prevent the occurrence of small invariant segments by small
perturbations of $f$.
We refer to [Ke.], [B.Y.], [B.K.] for similar
hypotheses.
In section IV we will give a detailed analysis of a typical
example, resolving this (apparent) instability.
\proclaim{Hypothesis H3}. {\item{(1)} Any periodic point $a_j$
from the definition of $f$ is either a point of discontinuity or
$f'(a^-_j)f'(a^+_j)>0$.
\item{(2)} For any integer $m\ge0$, we have
$$
f^{m+2}(\partial{\cal A})\cap \{0,1\}=\emptyset\;,
$$
where $\partial{\cal A}=\{a_0,...,a_l\}$.}
A similar hypothesis was also introduced in [C.G.]. We observe that (2)
is not satisfied for the simple map $2x\pmod 1$. In fact (2) is not
really needed in that case: one can still prove the theorem below at
the cost of using more detailed estimates.
\proclaim{Theorem 1}. {Assume hypotheses {\bf H1}-{\bf H3} are
satisfied and $\varphi$ even and $C^1$, supported in $[-1,1]$.
Then for $\epsilon<1$ small enough, there is a number $a>0$
given by
$$
a=\left[-h_{0}(0_{+})
\int_{-1}^{0}\xi
\varphi(\xi)d\xi+h_{0}(1_{-})\int_{0}^{1}
\xi\varphi(\xi)d\xi\right]
$$
such that the operator $Q$ has a
peripheral spectrum consisting of a unique positive simple eigenvalue
$\lambda(\epsilon)$ which satisfies the estimate
$$
\lambda(\epsilon)=1-a\epsilon+o(\epsilon)\;.
$$
The associated non negative (integrable) eigenvector $h_{\epsilon}$
normalized by the condition
$$
\int h_{\epsilon}\,dx=1
$$
is the density of an
absolutely continuous quasi invariant distribution
with respect to
the time of first exit of the process $(X_{n})$
from the interval $[0,1]$. Moreover,
if we denote by $\tau$ this exit time
then the following limit exists
$$
\lim_{n\to\infty}\proba_{x}\{X_{1}\in A_{1},\cdots,X_{k}\in A_{k}
\,|\,\tau>n\}=\tilde{\proba}_{x}\{X_{1}\in A_{1},
\cdots,X_{k}\in A_{k}\}\;,
$$
and defines a law on trajectories which never leave the interval.
$\tilde{\proba}$
is given by the Markov kernel $M$ defined on $[0,1]$ by
$$
M(x,dy)=\lambda(\epsilon)^{-1}{h_{\epsilon}(y)\over h_{\epsilon}(x)}\;
\varphi\left({y-f(x)\over\epsilon}\right){dy\over\epsilon}\;.
$$
The eigenform $e_\epsilon$ corresponding to $\lambda(\epsilon)$
is a positive measure, and
the probability measure $h_{\epsilon}\,de_{\epsilon}$
is invariant for the kernel $M$.
}
As explained in the introduction,
we want to compare the above result to
a situation where there is no deterministic chaotic dynamic but
essentially an
integrable one. For simplicity we shall take for $f$ the
case of the identity map, and define the associated stochastic
perturbation by simply adding independent noises, namely
$$
f_\xi(x)=x+\epsilon\xi\;.
$$
The iteration of course generates a random walk with independent
increments distributed as $\epsilon\xi$.
The transfer operator is simply given by $\tilde Q$.
\proclaim{Theorem 2}. {There is a number $\epsilon_d>0$ such that for
$\epsilon\in ]0,\epsilon_d]$
the operator $\tilde Q$ has a peripheral
spectrum
consisting of a unique simple eigenvalue $\lambda_d(\epsilon)>0$
satisfying
$$
1>\lambda_d(\epsilon)>1-{\pi^{2}m_2\over 2}
\epsilon^{2}+\Oun\epsilon^{3}\;,
$$
where $m_2$ is the second moment of the random variable $\xi$
$$
m_2=\int_{-1}^{1}\xi^{2}\varphi(\xi)d\xi\;.
$$}
We conjecture that $\lambda_d(\epsilon)=1-{\pi^{2}m_2\over 2}
\epsilon^{2}+\Oun\epsilon^{3}$. The difficulty in
estimating the spectrum of $\tilde Q$ is that when $\epsilon$ tends to
zero, this operator formally converges to the identity which has an
infinitely degenerate eigenvalue 1.
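The conjectured behavior can be checked numerically by discretizing $\tilde Q$ alone (no deterministic map) and computing its leading eigenvalue; with uniform noise (so $m_2=1/3$, chosen only for illustration and not $C^1$ as the theorem assumes) the eigenvalue sits close to $1-\pi^{2}m_2\epsilon^{2}/2$:

```python
import numpy as np

n, eps = 400, 0.05
cell = 1.0 / n
centers = (np.arange(n) + 0.5) * cell

# Discretized tilde Q: one random-walk step x -> x + eps*xi, xi uniform
# on [-1,1], with killing outside [0,1].
K = np.zeros((n, n))
for j in range(n):
    lo, hi = centers[j] - eps, centers[j] + eps
    for i in range(n):
        a, b = i * cell, (i + 1) * cell
        K[i, j] = max(0.0, min(b, hi) - max(a, lo)) / (2.0 * eps)

lam_d = np.max(np.linalg.eigvals(K).real)
m2 = 1.0 / 3.0                # second moment of the uniform density on [-1,1]
print(lam_d, 1.0 - np.pi**2 * m2 * eps**2 / 2.0)
```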
As explained in the introduction, the above two results provide a
quantitative difference between the
escape rates in the two cases with and without chaos.
\beginsection{III. PROOFS OF THEOREMS 1 AND 2.}
\proclaim{Proposition 3}. {Under condition {\bf H1},
and {\bf H3(1)} there are
numbers $1>\alpha>\tilde\alpha$,
$\beta>C>1$, and $1>\epsilon_0>0$ such that for any
$\epsilon\in[0,\epsilon_0]$, for any
function $g\in\bv$ and any integer
$n\ge 0$ we have
$$
\vee((Q^n-P^n)g)\le \beta\alpha^n\vee g+\beta\|g\|_1\;,\eqno(2)
$$
$$
\int|Qg-Pg|dx\le \beta\epsilon(\vee g+\|g\|_1)
\quad \hbox{and also}\quad \|Q\|_1\le 1\;.\eqno(3)
$$}
\proof
The first estimate (2) follows from the (LY) estimate and from the
bound
$$
\vee(Q^ng)\le \beta_1\alpha_1^n\vee g+\beta_1\|g\|_1\;,
$$
where $0<\alpha_1<1$ and $\beta_1>0$.
The proof of this estimate requires only minor modifications from the
proof of a similar result in [B.Y.]. We
leave these modifications to the
reader.
We now prove the second part of Proposition 3.
We will first prove that
$$
\|(\tilde Q-I)g\|_{1}\le {\cal O}(1)\epsilon(\vee g +\|g\|_{1})
$$
from which the first part of (3) follows
using the estimate (LY) since $Q-P=(\tilde
Q-I)P$.
We have obviously
$$
\|(\tilde Q-I)g\|_{1}=\int_{0}^{\epsilon}|(\tilde Q-I)g(x)| dx
+\int_{\epsilon}^{1-\epsilon}|(\tilde Q-I)g(x)| dx +
\int_{1-\epsilon}^{1}|(\tilde Q-I)g(x)| dx\le
$$
$$
4\epsilon \|g\|_{\infty}+
\int_{\epsilon}^{1-\epsilon}|(\tilde Q-I)g(x)|
dx \le 4\epsilon (\vee g+\|g\|_{1})+
\int_{\epsilon}^{1-\epsilon}|(\tilde Q-I)g(x)|
dx \;.
$$
Since $g$ is a function of bounded variation, there is a positive
measure $\nu$ with total mass $\vee g$ such that
$|g(b)-g(a)|\le\nu([a,b])$ whenever $0\le a<b\le 1$. For
$x\in[\epsilon,1-\epsilon]$ we have
$|(\tilde Q-I)g(x)|\le\expectation_{\xi}|g(x-\epsilon\xi)-g(x)|
\le\nu([x-\epsilon,x+\epsilon])$, and integrating over $x$ bounds the
middle integral by $2\epsilon\vee g$. This proves the first part of
(3); the bound $\|Q\|_{1}\le 1$ is immediate since both $\tilde Q$
and $P$ are contractions in $L^{1}$.\cqfd
\proclaim{Proposition 4}. {Under the hypotheses of Proposition 3, for
any function $g\in\bv$, any integer $n\ge 0$ and any $\theta>0$, we
have
$$
\norm Q^ng-P^ng\norm_\theta\le
$$
$$
\beta\alpha^n\theta\vee g
+\beta\theta\|g\|_1+{\beta^2\epsilon\over 1-\alpha}
\vee g+\beta n\epsilon (1+\beta)\|g\|_1
\le\beta\left[\alpha^n+{\beta\epsilon\theta^{-1}\over 1-\alpha}
+\theta+
n\epsilon(1+\beta)
\right]\norm
g\norm_{\theta}\;.
$$}
This follows at once from the previous result and the
definition of the
$\theta$ norm.\cqfd
A first step towards a proof of Theorem 1 is the following result.
\proclaim{Proposition 5}. {There is a number $n_{1}>0$ such that for
$\epsilon$ small enough, the operator $Q$ has only a simple positive
eigenvalue $\lambda(\epsilon)$ outside the disk of radius
$2^{-1/n_{1}}$. The eigenvector
$h_{\epsilon}$ can be chosen nonnegative
of integral one. The associated
eigenform $e_{\epsilon}$ is a positive
measure which satisfies
$$
|e_{\epsilon}(1)-1|\le \Oun\epsilon\log\epsilon^{-1}\;.
$$}
\proof
Let $0<\theta_0<1$ be a positive number that we will fix below.
By a direct computation we have
$$
\norm e_0\norm_{\theta_0}=1
\quad\hbox{and}\quad \norm h_0\norm_{\theta_0} \le
(1+C)\;.
$$
This implies
$$
\norm P_0\norm_{\theta_0}\le (1+C)\; .
$$
Now,
since $P_0$ is a projection of rank one, it is easy to compute explicitly
its resolvent, namely
$$
R^0_\zeta=(P_0-\zeta)^{-1}={P_0\over 1-\zeta}-{I-P_0\over\zeta}\;
$$
and we get
$$
\sup_{|\zeta|=1/2}\norm R^0_\zeta\norm_{\theta_0}\le 6(1+C)\;.
$$
Now choose the smallest integer $n_1$ such that
$$
(1+C)^3\left[(B+\beta)r_0^{n_1/2}+
\beta\alpha^{n_1}\right]<1/36\;. \eqno(4)
$$
Then take
$$
{\theta_0}=r_0^{n_1/2}\;.
$$
It follows from our
previous estimates that there is a
number $0<\epsilon_1<\epsilon_0<1$ such
that for any $\epsilon\in[0,\epsilon_1[$
$$
\sup_{|\zeta|=1/2}\norm R^0_\zeta\norm_{\theta_0}\;\, \norm
R^{n_1}+(Q^{n_1}-P^{n_1})\norm_{\theta_0} < 1\;.
$$
From now on, we will always assume that $\epsilon$ is smaller than
$\epsilon_1$.
Since $Q^{n_1}=P_0+R^{n_1}+(Q^{n_1}-P^{n_1})$,
by perturbation theory (see [K.II.\S 3])
we deduce that $Q^{n_1}$ has a
unique simple eigenvalue $\gamma(\epsilon)$
outside $D_{1/2}$ and the rest of
the spectrum is contained inside that disk.
It follows easily from (4) that
$$\norm\int\limits_{|\zeta|=1/2}R^0_\zeta(R^{n_1}+Q^{n_1}-P^{n_1})R_
\zeta d\zeta\norm_{\theta_0}<{1\over 1+C},$$
where $R_\zeta$ denotes the resolvent of $Q^{n_1}$.
Let
$$\hat h_\epsilon =(I-\int\limits_{|\zeta|={1\over 2}}
R_\zeta d\zeta)h_0\;.$$
From the resolvent equation it follows that
$$\hat h_\epsilon=h_0+\int\limits_{|\zeta |={1\over 2}}
R_\zeta^0(R^{n_1}+Q^{n_1}-P^{n_1})R_\zeta h_0 d\zeta.$$
From our previous estimates, for
$\epsilon>0$ small enough we find $e_0(\hat h_\epsilon)>0$.
Therefore, the eigenvector
$h_\epsilon$ normalized by $e_0(h_\epsilon)=1$
(namely $h_\epsilon=(e_0(\hat h_\epsilon))^{-1}\hat h_\epsilon$)
satisfies
$$
\norm h_\epsilon\norm_{\theta_0}\le\Oun
$$
uniformly in $\epsilon$, and by equivalence of norms,
$$
\norm h_\epsilon\norm\le\Oun{\theta_0}^{-1}\eqno(5)
$$
also uniformly in $\epsilon$ small enough.
The spectral properties for $Q$
follow immediately. Indeed, we conclude that
$Q$ has a simple eigenvalue
$\lambda(\epsilon)$ outside of the disk of
radius $2^{-1/n_{1}}$ with eigenvector $h_\epsilon$. $\lambda(\epsilon)$
must be one of the complex
$n_1$-th roots of $\gamma(\epsilon)$, but since $Q$ is a
positivity preserving operator, $\lambda(\epsilon)$ must be the only
positive one (since $Q^{n_1}$ is a positivity preserving operator,
$\gamma(\epsilon)$ is real and positive).
We now prove the estimate on $e_{\epsilon}(1)$. From the spectral
property of $Q$ and perturbation theory we have for any integer $N$
$$
Q^{N}=h_{\epsilon}\lambda(\epsilon)^{N}e_{\epsilon}+T^{N}
$$
where one can choose $\epsilon_{1}>0$ small enough such that there are
two constants $C_{1}>0$ and $r_{2}\in[0,1[$
such that for any $\epsilon\in]0,\epsilon_{1}]$ we
have
$$
\norm T^{N}\norm\le C_{1}r_{2}^{N}\;.
$$
This implies (since $e_{0}(h_{\epsilon})=1$)
$$
1+e_{0}\left((Q^{N}-P^{N})1\right)=
e_{0}(Q^{N}1)=\lambda(\epsilon)^{N}e_{\epsilon}(1)+{\cal O}(r_{2}^{N})
\;.
$$
On the other hand we have
$$
e_{0}\left((Q^{N}-P^{N})1\right)=\sum_{j=0}^{N-1}e_{0}
\left(P^{j}(Q-P)Q^{N-j-1}1\right)=\sum_{j=0}^{N-1}e_{0}
\left((Q-P)Q^{N-j-1}1\right)=\epsilon N\Oun
$$
by Proposition 3 and (LY). In other words, we have
$$
|\lambda(\epsilon)^{N}e_{\epsilon}(1)-1|\le \Oun(\epsilon N
+r_{2}^{N})\;.
$$
We take $N_0=[\log\epsilon^{-1}]$
and denote $\chi=\lambda(\epsilon)^{N_0}$,
so the last equation applied with $N=N_0$ and $N=2N_0$ gives
$|\chi e_\epsilon(1)-1|\le\eta_1$, $|\chi^2e_\epsilon(1)-1|\le\eta_2$
with $\eta_1$ and $\eta_2$
of order ${\cal O}(1)\epsilon\log\epsilon^{-1}$.
The second inequality reads
$$1+\eta_2>\chi(\chi e_\epsilon(1))>1-\eta_2$$
and for $\epsilon$ small enough
combining with the first inequality we get
$${1+\eta_2\over 1-\eta_1}>\chi>{1-\eta_2\over 1+\eta_1}$$
and finally by using again the first inequality we obtain
$${(1+\eta_1)^2\over 1-\eta_2}>e_\epsilon(1)>
{(1-\eta_1)^2\over 1+\eta_2}.$$
By taking into account that $\eta_i\sim
{\cal O}(1)\epsilon\log \epsilon^{-1}$ for $i=1,2$ we find the result.
\cqfd
\def\heps{h_{\epsilon}}
We now prove some intermediate results that will
allow us to improve the
conclusions of Proposition 3.
\proclaim{Lemma 6}. {If $\varphi$ has compact support in the interval
$[-1,1]$, there is (for $\epsilon$ small enough)
a nonnegative function
$\omega_{\epsilon}$ with support in $[0,\epsilon]\cup[1-\epsilon,1]$
bounded by one and given by
$$
\omega_{\epsilon}(x)=\bigchi_{[0,1]}(x)
\int_{-1}^{1}\varphi(\xi)\left(1-\bigchi_{[0,1]}
(x+\epsilon\xi)\right)d\xi\;,
$$
such that for any integrable function $g$ with support in
$[0,1]$ we have
$$
e_{0}(\tilde Q-I)g=-\int_{0}^{1}\omega_{\epsilon}(x)g(x)dx\;.
$$
Moreover,
$$
e_{0}(\tilde Q-I)h_{0}=-\epsilon\left[-h_{0}(0_{+})
\int_{-1}^{0}\xi
\varphi(\xi)d\xi+h_{0}(1_{-})\int_{0}^{1}
\xi\varphi(\xi)d\xi\right]
+o(\epsilon)\;.
$$}
\proof
One easily gets by a linear change of variables the expression for
$\omega_{\epsilon}(x)$
from which the first part of the Lemma follows immediately. The second
part follows since a function of bounded variation has a left and a
right limit.\cqfd
We will denote below
by $\sigma$ the number (larger than 1)
$$
\sigma=\sup_{x}|f'(x)|\;
$$
and by $d$ we denote the usual distance on the interval.
\proclaim{Lemma 7}. {There is a constant $K_1>0$ such that for any
integer $n>0$ and for any $1>\epsilon>0$ satisfying
$3(1+\sigma)^{n}\epsilon\le\delta_{0}=
\inf_{0\le j<l}(a_{j+1}-a_{j})$, the function $Q^{n}1$ is
differentiable at every $x$ with
$d(x,\cup_{j=1}^{n}f^{j}(\partial{\cal A}))>\epsilon(1+\sigma)^{n}$
and
$$
\sup_{\{x\,:\,d(x,\cup_{j=1}^{n}f^{j}(\partial{\cal A}))>
\epsilon(1+\sigma)^{n}\}}
|(Q^{n}1)'(x)|\le K_1^{n}\;.
$$}
\proof The proof is recursive. We first
define three positive constants
by
$$
C_{1}=\sup_{x}|f'(x)|^{-2}\;,C_{2}=\sup_{x}|f''(x)|/|f'(x)|^{3}\;,
C_{3}=\sup_{x,n}Q^{n}1(x)+1\;.
$$
The last constant is finite because the non negative function $Q^{n}1$
has an integral bounded by one and uniformly bounded variation by
Proposition 3.
It is useful to introduce the set $V_{n}$ defined by
$$
V_{n}=\left\{x\,:\,d(x,\cup_{j=1}^{n}f^{j}(\partial{\cal A}))\le
\epsilon(1+\sigma)^{n}\right\}\;.
$$
We have
$$
Qg(x)=\expectation_{\xi}\left\{\sum_{I\in {\cal A}_{\xi}}
{g(\psi_{I}(x,\xi))\,\bigchi_{f_{\xi}(I)}(x)\over
|f'(\psi_{I}(x,\xi))|}\right\}\eqno (6)
$$
where ${\cal A}_{\xi}$ denotes
the set of subintervals $I\subset [0,1]$
where $f_{\xi}$
is monotone, regular and maps $I$ into $[0,1]$, and $\psi_{I}(x,\xi)$
denotes the inverse of $f_{\xi}$
from $f_{\xi}(I)$ to $I$. Observe that
for $\epsilon$ small enough, $\hbox{card}({\cal A}_{\xi})=l$.
Note that from the definition of
$f_{\xi}$ any point in the boundary of
an interval $f_{\xi}(I)$ with $I\in {\cal A}_{\xi}$ is at a distance
at most $\epsilon$ of $f(\partial{\cal A})$. From
the previous formula it is then immediate to verify that for
$x\notin V_{1}$ we have
$$
|(Q1)'(x)|\le lC_{2}<K_{1}\;,
$$
since the constant $K_{1}$ chosen below satisfies $K_{1}>lC_{2}$.
We now assume that the Lemma has been proven up to an integer $n>1$.
We define
$$
\tilde V_{n+1}=\{x\,:\,\psi_{I}(x,\xi)\in V_{n} \hbox{ for some $\xi$
with $|\xi|\le 1$ and some $I\in {\cal A}_{\xi}$}\}\cup V_{1}\;.
$$
Since $Q^{n}1$ is differentiable on the complement of $V_{n}$, it
follows easily that $Q^{n+1}1$ is
differentiable on the complement of
$\tilde V_{n+1}$. Moreover using the
bound on the derivative of $Q^{n}1$
and the explicit expression (6) for $Q$ we have since $f'=f_{\xi}'$
$$
\sup_{x\notin\tilde V_{n+1}}|(Q^{n+1}1)'(x)|\le
l(C_{2}C_{3}+C_{1}K_1^{n})\le K_1^{n+1}
$$
if $K_1=l(C_{2}C_{3}+C_{1})+1$.
We finally check that
$\tilde V_{n+1}\subset V_{n+1}$. First of all, by
definition, $V_{1}\subset V_{n+1} $. We now observe that if
$\psi_{I}(x,\xi)\in V_{n}$, then
$$
x\in f_{\xi}(V_{n}\cap\cup_{I\in{\cal A}_{\xi}}I)\;.
$$
We also observe that
$V_{n}$ is a finite union of intervals $J$ of width
at most
$2\epsilon(1+\sigma)^{n}$ each centered at a point of
$\cup_{j=1}^{n}f^{j}(\partial{\cal A})$.
There are now two cases.
Either $f_{\xi}$ is
differentiable on $J$, in which case $f_{\xi}(J)$ is an interval
containing a point $\zeta$
of $\cup_{j=2}^{n+1}f^{j}(\partial{\cal A})+\epsilon\xi$
and each point of $f_{\xi}(J)$ is at a distance less than
$\epsilon+\epsilon\sigma(1+\sigma)^{n}$ from $\zeta$. Therefore
$f_{\xi}(J)$ is contained in
$V_{n+1}$. In the other case,
since $\epsilon(1+\sigma)^{n}<\delta_{0}/2$,
$f_{\xi}(J)$ is the union of at most two intervals
containing a point of
$\cup_{j=1}^{n+1}f^{j}(\partial{\cal A})+\epsilon\xi$. It follows as
before that these
two intervals are also in $V_{n+1}$.\cqfd
We remark that, using better estimates on the derivatives, one can
replace the factor $K_1^{n}$ in the above Lemma by a constant uniform
in $n$.
\proclaim{Corollary 8}. {There is a constant $K>0$ such that for any
integer $n>0$ and for any $\epsilon>0$ satisfying
$3(1+\sigma)^{n}\epsilon\le\delta_{0}$, we have
$$
\sup_{\{x\,:\,d(x,\cup_{j=1}^{n}f^{j}(\partial{\cal A}))>
\epsilon(1+\sigma)^{n}\}}
|(PQ^{n-1}1)'(x)|\le K^{n}\;.
$$}
The proof is similar to the proof of the previous Lemma and is left to
the reader.\cqfd
\proof(of Theorem 1).
Let $N$ denote a large integer (but of order $o(\log\epsilon^{-1})$)
depending on $\epsilon$ to be fixed later on.
Since $e_{0}(\heps)=1$, we first have
$$
\lambda-1=e_{0}((Q-P)\heps)=\lambda^{-N}e_{0}((Q-P)Q^{N}\heps)
$$
$$
=\lambda^{-N}e_{0}((Q-P)P^{N}\heps)+
\lambda^{-N}e_{0}((Q-P)(Q^{N}-P^{N})\heps)
$$
$$
=\lambda^{-N}e_{0}((Q-P)h_{0})+
\lambda^{-N}e_{0}((Q-P)(P^{N}\heps-h_{0}))
+\sum_{j=0}^{N-1}\lambda^{-j-1}e_{0}((Q-P)P^{j}(Q-P)\heps)\;.
\eqno(7)
$$
From the first equality and Lemma 6 we get
$$
\lambda-1=e_{0}((\tilde Q-I)P\heps)=\int_0^1\omega_\epsilon(x)P\heps(x)
dx ={\cal O}(\epsilon)\;.\eqno(8)
$$
We now observe using again $e_{0}(\heps)=1$ and equation (1) that
$P^{N+1}\heps-h_{0}=v$ with
$$
\vee v+\|v\|_{1}\le Br_{0}^{N+1}(\vee \heps+\|\heps\|_{1})
\le\Oun r_{0}^{N} \; ,
$$
since $\theta_{0}^{-1}={\cal O}(1)$, using (5).
This implies using Lemma 6 that
$$
|e_{0}((Q-P)(P^{N}\heps-h_{0}))|=|e_{0}((\tilde Q-I)(P^{N+1}\heps-h_{0}))|
=\left|\int_0^1\omega_\epsilon(x)v(x)dx\right|
\le C\epsilon r_{0}^{N}
$$
where the constant $C$ is uniform in $\epsilon$ small.
This estimates the second term of (7).
Observe also from (8) that for $N=o(\log\epsilon^{-1})$
$$
|\lambda^{-N}-1|={\cal O}(1)N\epsilon
$$
which implies
$$
|(\lambda^{-N}-1)e_{0}((Q-P)h_{0})|\le{\cal O}(1) \epsilon^2 N\;.
$$
Using perturbation theory with $\theta=\theta_{0}$ we have for
$\epsilon$ small enough
$$
\heps= \lambda (\epsilon)^{-N}Q^N 1+ \lambda(\epsilon)^{-N}u
$$
with
$$
\vee u+\|u\|_{1}\le \theta_{0}^{-1}
\norm u\norm_{\theta_{0}}\le K r_{1}^{N},
\eqno(9)
$$
where $r_{1}=2^{-1/n_{1}}$ and $K$ is a constant independent of
$\epsilon$ small.
We have as above
$$
\eqalign{
&\lambda(\epsilon)^{-N}
\left|\sum_{j=0}^{N-1}\lambda^{-j-1}e_{0}((Q-P)P^{j}(Q-P)u)\right|\cr
&=
\lambda(\epsilon)^{-N}
\left|\sum_{j=0}^{N-1}\lambda^{-j-1}e_{0}((\tilde Q-I)
P^{j+1}(Q-P)u)\right|\cr
&\le\lambda(\epsilon)^{-N}\sum\limits^{N-1}_{j=0}\lambda^{-j-1}
\|\omega_\epsilon\|_1\|P^{j+1}(Q-P)u\|_\infty
\le{\cal O}(1) \epsilon N r_{1}^{N}\;,}
$$
where for the last inequality we use (LY), Proposition 3, Lemma 6
and (9).
We finally have to estimate for $j>0$ the quantity
$$
e_{0}((Q-P)P^{j}(Q-P)Q^{N}1)\;.
$$
Note that
$$
e_{0}((Q-P)P^{j}(Q-P)Q^{N}1)=
e_{0}((\tilde Q-I)P^{j+1}(\tilde Q-I)PQ^{N}1)\;.
$$
Using Lemma 6 and the elementary properties of $P$, we have
$$
e_{0}((\tilde Q-I)P^{j+1}(\tilde Q-I)PQ^{N}1)=
-\int_{0}^{1}\omega_{\epsilon}(f^{j+1}(x))((
\tilde Q-I)PQ^{N}1)(x)dx\;.
$$
The idea now is to locate the support of
$\omega_{\epsilon}\circ f^{j+1}$. We define the sequence of numbers
$\delta(n)$ for $n>0$ by
$$
\delta(n)=\inf_{0\le s\le
n}d\left(f^{-s-1}(\{0,1\}),
\cup_{j=1}^{n}f^{j}(\partial{\cal A})\right)\;.
$$
Note that from hypothesis {\bf H3}
we have for any $n>0$ $\delta(n)>0$.
We denote by $\eta$ the number
$$
\eta=\inf_{x,n}|(f^{n})'(x)|\;.
$$
Note that from the
expansivity it follows that $\eta>0$. Note also that
by definition $\delta(n)$ is decreasing in $n$. We now claim that if
$\delta(s+1)>\epsilon/\eta$ and
$\epsilon$ is small enough, the support
of $\omega_{\epsilon}\circ f^{s+1}$ is contained in
$$
\left\{x\,:\,d(x,f^{-s-1}(\{0,1\}))<\epsilon/\eta\right\}\;.
$$
The proof is again recursive; the case $s=0$ follows from Lemma 6. We now
observe that if $a$ is a point such that $f^{s+1}(a)\in\{0,1\}$, and
$\delta(s+2)>\epsilon/\eta$,
then the interval $J$ of width $\epsilon/\eta$
around $a$ does not meet
$f(\partial{\cal A})$. Therefore the preimages
of $J$ are well defined and the induction follows.
We now have
$$
e_{0}((\tilde Q-I)P^{j+1}(\tilde Q-I)PQ^{N}1)=-
\sum_{a\in f^{-j-1}(\{0,1\})}
\int^{a+\epsilon/\eta}_{a-\epsilon/\eta}
\omega_{\epsilon}\circ f^{j+1}(x)(\tilde Q-I)PQ^{N}1(x)dx\;.
$$
We now impose $\delta(N)>\epsilon/\eta$, and also
$$
\epsilon\sigma^{N}+{\epsilon\over\eta}\le\delta(N)/2\;.
$$
Note that when $\epsilon$ tends to zero we can also assume that $N$
tends to infinity with $N=o(\log\epsilon^{-1})$.
This implies for $0\le s\le N$
$$
\left\{x\,:\,d(x,\cup_{j=1}^{N}f^{j}(\partial{\cal A}))\le
\epsilon\sigma^{N}\right\}\cap \hbox{\rm supp}\;\omega_{\epsilon}\circ
f^{s+1}=\emptyset.
$$
Observe that if $g$ is differentiable on an interval
$[a,b]\subset[\epsilon,1-\epsilon]$ and such
that $|g'|\le R$ on $[a,b]$, then
$$
\sup_{x\in[a+\epsilon,b-\epsilon]}|\tilde Qg(x)-g(x)|\le R\epsilon\;.
$$
Therefore we get using Corollary 8 and $N=o(\log\epsilon^{-1})$
$$
\left|\sum_{j=0}^{N-1}\lambda^{-j-1}e_{0}(
(Q-P)P^{j}(Q-P)Q^{N}1)\right|
\le N\epsilon^{2}K^{N}l^{N}/\eta\;.
$$
Grouping together all the previous estimates we get the estimate on
$\lambda(\epsilon)$.
We have also for any function $u$ of bounded variation
$$
\expectation_{h_\epsilon dx}(u(X_1)|X_1\in[0,1])=
{\int_0^{1} u(x)Qh_\epsilon(x)dx\over \int_0^{1}Qh_\epsilon(x)dx}
=\int_0^{1} u(x)h_\epsilon(x)dx\;,
$$
which means that the measure $h_\epsilon(x)dx$ is quasi invariant.
In order to prove the last part of Theorem 1, note that for
$n>k$ the Markov
property implies
$$
\proba_{x_{0}}\big\{X_{1}\in dx_{1},\,\cdots\,,X_{k}\in
dx_{k}\,|\,\tau>n\big\}
={\proba_{x_{k}}\{\tau>n-k\}\over \proba_{x_{0}}\{\tau>n\}}\;
\proba_{x_{0}}\big\{X_{1}\in dx_{1},\,\cdots\,,X_{k}\in
dx_{k}\big\}
$$
$$
=\prod_{l=0}^{k-1}
{\proba_{x_{l+1}}\{\tau>n-l-1\}\over \proba_{x_{l}}\{\tau>n-l\}}\;
\proba_{x_{l}}\big\{X_{1}\in dx_{l+1}\big\}\;.
$$
We have also
$$
\proba_{x}\big\{X_{1}\in dy\big\}=
\varphi\left\{{y-f(x)\over\epsilon}\right\}\;{dy\over \epsilon}\;.
$$
On the other hand, from the spectral decomposition
$$
Q^{n}=h_{\epsilon}\lambda(\epsilon)^{n}e_{\epsilon}+T^{n}
$$
and the bounds on $T^{n}$ we get
$$
{\proba_{y}\{\tau>n-1\}\over \proba_{x}\{\tau>n\}}=
{Q^{n-1}1(y)\over
Q^{n}1(x)}={h_{\epsilon}(y)
\lambda(\epsilon)^{n-1}e_{\epsilon}(1)+T^{n-1}1(y)
\over h_{\epsilon}(x)\lambda(\epsilon)^{n}e_{\epsilon}(1)+T^{n}1(x)}
\longto_{n\to\infty}\lambda(\epsilon)^{-1}{h_{\epsilon}(y)\over
h_{\epsilon} (x)}\;.
$$
It follows now at once from the above expressions that the limit law
$$
\lim_{n\to\infty}
\proba_{x_{0}}\big\{X_{1}\in dx_{1},\,\cdots\,,X_{k}\in
dx_{k}\,|\,\tau>n\big\}
$$
exists and defines a Markovian process with transition kernel
$$
M(x,dy)=\lambda(\epsilon)^{-1}
{h_\epsilon(y)\over h_\epsilon(x)}\varphi
\left(
{y-f(x)\over \epsilon}\right){dy\over\epsilon} .$$\cqfd
\proof (of Theorem 2). We
first observe that the operator $\tilde Q$ has
a kernel $\epsilon^{-1}\varphi((x-y)/\epsilon)$. It
follows easily since
$\varphi$ is $C^{1}$
that this operator is compact in $\cal B$. It is also
compact (in fact Hilbert Schmidt) in $L^{2}$ and maps this space in
$\cal B$. Therefore the spectrum in $\cal B$ and in $L^{2}$
coincide. Moreover, since $\varphi$ is symmetric $\tilde Q$ is a self
adjoint operator in $L^{2}$ which maps positive functions to positive
functions. It is also obvious that there is a large iterate which is
positivity improving. Therefore by the Krein-Rutmann theorem ([K.R.])
the peripheral spectrum is
composed of a simple positive eigenvalue. The
result follows at once from a
trivial application of the minimax principle using the trial
function $\sin(\pi x)$ [K.I.\S 10].\cqfd
\beginsection{IV. ANALYSIS OF A DETERMINISTIC PERTURBATION.}
We consider the simple map $f_0(x)=2x\pmod 1$. We then choose a
(small) positive number $1/2>\epsilon>0$ and define the interval
$K=[1-\epsilon,1]$ and the discontinuous map $f_\epsilon$ by
$$
f_\epsilon(x)=\cases{f_0(x)& if $0\le x<1-\epsilon$,\cr
1-2|x-1+\epsilon/2|& if $1-\epsilon\le x\le 1$.\cr}
$$
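The definition of $f_\epsilon$ can be checked directly; the following minimal sketch (the function name `f_eps` and the sample points are ours, not from the text) verifies that the map agrees with $f_0$ off $K$ and sends $K=[1-\epsilon,1]$ into itself.

```python
def f_eps(x, eps):
    """Deterministic perturbation of f_0(x) = 2x mod 1: outside
    K = [1 - eps, 1] the map is unchanged, while on K it is replaced
    by a full tent map sending K onto itself."""
    if x < 1.0 - eps:
        return (2.0 * x) % 1.0
    return 1.0 - 2.0 * abs(x - 1.0 + eps / 2.0)
```

For instance, with $\epsilon=1/4$ the tent branch attains its maximum $1$ at $x=1-\epsilon/2$, and every sampled point of $K$ stays in $K$, consistent with $K$ being an invariant segment.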
The map $f_\epsilon$
has the interval $K$ as an invariant segment and one can
show that for Lebesgue
almost any initial condition, the trajectory will
end in $K$ (this will also
be a consequence of the results below). As we
will see below,
$f_\epsilon$ has a unique a.c.i.m., ergodic and mixing, with support
in $K$ (and we have the SRB property with respect to the Lebesgue
measure of the interval $[0,1]$). Therefore, if we let our parameter
$\epsilon$ tend to zero the SRB measure of $f_\epsilon$ converges to
the Dirac measure at the point 1
which is not the SRB measure of $f_0$
(the Lebesgue measure). This
example is a cooked up version of an example of Keller [Ke.] where the
perturbed map develops an
invariant segment. We conjecture however that
the analysis developed below is far more general and could be extended
to perturbations of
maps which have a periodic point in the boundary of
their defining partition.
Let $P_0$ and $P_\epsilon$ denote the transfer operators of the
maps $f_0$ and $f_\epsilon$ respectively. We will be interested in
estimating the spectrum of
$P_\epsilon$ and compare it to the spectrum
of $P_0$ for $\epsilon$ small. Our
main result is as follows, using the
notation $\sigma(\cdot)$
for the spectrum of an operator, and $D_r$ for
a disk of radius $r$ in the complex plane centered at the origin.
\proclaim{Theorem 9}. {There is a number $1>r>0$
such that in the space of
functions of bounded variation
\item{ i)} $\sigma(P_0)=\{1\}\cup\Sigma_0$ with $\Sigma_0\subset D_r$.
Moreover, 1 is a simple eigenvalue with the eigenvector 1, and the
eigenform $e_0$
is given by the integration over the Lebesgue measure.
\item{ ii)} There is a number $\epsilon_0>0$ such that if
$\epsilon\in]0,\epsilon_0]$, there is a number
$1>\lambda(\epsilon)>1-o(1/\log\epsilon^{-1})$ such that
$$
\sigma(P_\epsilon)=\{1,\lambda(\epsilon)\}\bigcup\Sigma_\epsilon\quad
\hbox{with}\quad \Sigma_\epsilon\subset D_r\;.
$$
Moreover, 1 and
$\lambda(\epsilon)$ are simple eigenvalues, with eigenvectors
$\bigchi_K$ and $h_K$ respectively. $h_K$ is a non negative function
which converges to $1$ (in the $L^{1}$ norm) if $\epsilon\to0$. The
measure $h_K dx$ is a quasi invariant measure for
$f_\epsilon$ on $K^c$. The eigenform
$\mu_K$ is a probability measure, and the measure $h_Kd\mu_K$ is an
invariant measure for $f_\epsilon$
which converges weakly to the Lebesgue measure when $\epsilon\to0$.
}
So we see that in this case the problem of the non convergence of the
perturbation is resolved in the following way. The
SRB measure for the map
$f_\epsilon$ converges to the Dirac measure at the point 1. There is
however another measure, corresponding to an eigenvalue near 1
but smaller than one, which does converge to the SRB measure of $f_0$.
Technically, when looking at the Lasota and Yorke estimate for the
transfer operator $P_\epsilon$,
one gets large constants because of the
presence of the small invariant segment. It is however natural in the
presence of an invariant segment to worry about the trajectories which
will never enter that segment (or enter only after a very long time).
These trajectories are asymptotically distributed according to
an invariant measure which is supported by an invariant Cantor set.
This measure
was essentially constructed in [C.G.], but for the convenience of the
reader we will repeat the
proof in the present context (which turns out
to be slightly simpler). It turns out that
the number $-1/\log\lambda(\epsilon)$ is
exactly the (exponential) escape rate from the interval $K^c$ (the
complement of $K$), and $h_{K}d\mu_{K}$
is an invariant measure for the
dynamics on the invariant Cantor set.
The assertion i)
of the above theorem is of course easy and well known,
we have repeated it only for completeness of the result.
We will denote below by $Q$ the operator
$$
Q=P_\epsilon\bigchi_K\;.
$$
Note that since $K$ is invariant we
have $Q=\bigchi_K P_\epsilon\bigchi_K$.
This operator maps functions on $[0,1]$ into functions with support in
$K$. Moreover, if one considers
the functions with support in $K$, it is
the transfer operator of $f_\epsilon|_{K}$. So
we have also the following easy
and well known result.
\proclaim{Proposition 10}. { On the
Banach space of functions of bounded
variation with support in $K$, we have for some number $1>r_{3}>0$
$$
\sigma(Q)=\{1\}\cup\Sigma_{K}\quad\hbox{with}\quad
\Sigma_{K}\subset D_{r_{3}}\;.
$$
Moreover, 1 is a simple eigenvalue with eigenvector $\bigchi_{K}$.}
We will denote below by $S$ the operator
$$
S=P_\epsilon\bigchi_{K^{c}}=P_{0}\bigchi_{K^{c}}\;,
$$
the second equality holding because $f_\epsilon$ and $f_{0}$ coincide
on $K^{c}$.
\proclaim{Theorem 13}. {There are numbers $1>r_2>0$ and
$\epsilon_0>0$ such that if
$\epsilon\in]0,\epsilon_0]$, there is a number
$1>\lambda(\epsilon)>1-1/\log\epsilon^{-1}$ such that in the
space of functions of bounded variation
$$
\sigma(S)=\{\lambda(\epsilon)\}\bigcup\Omega_\epsilon\quad
\hbox{with}\quad \Omega_\epsilon\subset D_{r_2}\;.
$$
Moreover,
$\lambda(\epsilon)$
is a simple eigenvalue, with eigenvector denoted by
$h_K$. $h_K$ is a non negative function
which converges
to $1$ if $\epsilon\to0$. The measure $h_K dx$ is quasi
invariant. The eigenform
$\mu_K$ is a positive measure, the measure $h_Kd\mu_{K}$ is an
invariant
measure and $\mu_K$ converges weakly to the Lebesgue measure if
$\epsilon\to 0$.}
\proof As
explained above, we are going to apply perturbation theory in
a Banach space with an equivalent norm. Let first $q$ be a positive
number large enough, to be fixed later on independently of $\epsilon$.
We denote by $\Delta_q$ the operator
$$
\Delta_q=S^q-P_0^q\;.
$$
We first observe that for $m>1$ we have
$$
S^{m}=P_0^{m}-\sum_{l=0}^{m-1}S^{m-1-l}P_0\bigchi_K P_0^{l}=P_0^{m}-
\sum_{l=0}^{m-1}P_0^{m-l}\bigchi_K S^{l}\;.
$$
Combining these two expressions, we obtain easily
$$
S^{q}=P_0^{q}-\sum_{j=0}^{q-1}P_0^{q-j}\bigchi_K P_0^{j}
+\sum_{j=0}^{q-1}\sum_{l=0}^{j-1}
P_0^{q-j}\bigchi_K S^{j-1-l}P_0\bigchi_K P_0^{l}\;.
$$
Let now $q_K$ be the largest integer such that
$$
2^{q_{K}}\epsilon\le {1\over 2}\;.\eqno(10)
$$
We recall the Lasota and Yorke estimate: there are a constant $C>1$, a
number $0<\alpha<1$ and a
positive number $\Gamma>1$ such that for any function
$g$ of bounded variation and for any integer $n$
$$
\bigvee\left(P_0^ng\right)\le C\alpha^n\vee g+\Gamma\|g\|
$$
where
$\|\cdot\|$ denotes the $L^1$ norm with respect to the normalized
Lebesgue measure of the interval $[0,1]$. We also recall that
$\|P_0g\|\le \|g\|$.
We now have for $0\le j\le q-1$ and for any $g$ of
bounded variation
$$
\bigvee\left(P_0^j\bigchi_K P_0^{q-j}g\right)\le C\alpha^j
\bigvee\left(\bigchi_K P_0^{q-j}g\right)+\Gamma \|\bigchi_K
P_0^{q-j}g\|
$$
$$
\le C\alpha^j \bigvee\left(P_0^{q-j}g\right)+2C
\alpha^j\|P_0^{q-j}g\|_\infty+ \epsilon\Gamma
\|P_0^{q-j}g\|_\infty\;,
$$
where $\|\cdot\|_\infty$ is the $L^\infty$ norm, and we have used the
simple estimate
$$
\vee(g_1g_2)\le \vee(g_1)\|g_2\|_\infty+\vee(g_2)\|g_1\|_\infty\;,
$$
together with $\vee\bigchi_K=2$.
We now continue using the Lasota and Yorke estimate, together with
the bounds $\|u\|_\infty\le\vee(u)+\|u\|$ and $\|P_0u\|\le\|u\|$,
valid for any function $u$ of bounded variation, and we get
$$
\bigvee\left(P_0^j\bigchi_K P_0^{q-j}g\right)\le \Oun
\left(\alpha^{q-1}+\epsilon\alpha^{q-j} \right)\vee(g)
+\Oun\left(\alpha^j
+\epsilon \right)\|g\| \;.
$$
A similar estimate can be obtained for
the second term of
$\Delta_q$, either directly or simply observing that
for $\epsilon$ small we have
$$
\bigchi_K P_0 \bigchi_K= P_0 \bigchi_{\tilde K}
$$
where $\tilde K$ is also an interval. From this it follows that
$$
\bigvee\left(\Delta_q g\right)\le A
\left(q\alpha^{q}+\epsilon
\right)\vee(g) +A \left(1
+q\epsilon \right)\|g\| \;,\eqno(11)
$$
where $A$ is a
positive constant (independent of $q$, $g$ and $\epsilon$).
We have also
$$
\|P_0^j\bigchi_K P_0^{q-j}g\|\le \|\bigchi_K P_0^{q-j}g\|\le
\epsilon\|P_0^{q-j}g\|_\infty\le\epsilon\left(C\alpha^{q-j}
\vee(g)+\Gamma \|g\|\right)\;,
$$
which implies
immediately (with the similar estimate for the second term
of $\Delta_q$)
$$
\|\Delta_qg\|\le \epsilon B \left(
\vee(g)+q\|g\|\right)\;,
$$
where $B$ is a
positive constant (independent of $q$, $g$ and $\epsilon$).
The main observation is now that for a fixed (large) $q$,
if we take $\epsilon$
small enough, only the coefficient of $\|g\|$ in (11) is of order one,
all the
other coefficients are small. This suggests the introduction of
a balanced norm depending on a parameter $\theta>0$ given by
$$
\norm g\norm_\theta=\theta\vee(g)+\|g\|.
$$
Note that all these norms are equivalent and we have for
$\theta'>\theta$
$$
\norm g\norm_\theta\le \norm g\norm_{\theta'} \le{\theta'\over\theta}
\norm g\norm_\theta\;.
$$
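The two inequalities above can be illustrated on step functions, for which the variation is a finite sum of jumps. The sketch below is our own helper (with a simplified variation that ignores boundary jumps), not part of the proof.

```python
def norm_theta(values, widths, theta):
    """|||g|||_theta = theta * Var(g) + ||g||_{L^1} for a step function g
    taking the value values[i] on an interval of length widths[i].
    Var(g) is computed here as the sum of interior jumps."""
    var = sum(abs(values[i + 1] - values[i]) for i in range(len(values) - 1))
    l1 = sum(abs(v) * w for v, w in zip(values, widths))
    return theta * var + l1
```

Taking $\theta$ small down-weights the variation term, which is exactly how the proof makes the order-one contribution of (11) small in the new norm.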
We now apply
the spectral decomposition theorem to $P_0$ (see [K.]). We
conclude that
for any number $\xi\in]r_{1},1[$, there is a constant $\Psi$
which
depends only on $\xi$ (and which can be assumed larger than one),
there is a projection operator $\bigpi_0$ of rank one and an
operator $R$ satisfying $\bigpi_0R=R\bigpi_0=0$ and such that
$$
P_0=\bigpi_0+R\quad\hbox{and for any integer $n$}\quad \norm
R^n\norm_1\le\Psi \xi^n\;.
$$
In other words,
$\bigpi_0$ is the spectral part of $P_0$ corresponding to
the eigenvalue 1, and $R$ is the rest in the spectral decomposition.
Note that
$\bigpi_0$ is a rank one operator consisting of a linear form $F$
which is the integration on $[0,1]$ with the Lebesgue measure and an
eigenvector which is the constant function one. Therefore we have by
direct computations for any $\theta>0$, $\norm F\norm_\theta=1$ and
$\norm 1\norm_\theta=1$ which imply
$\norm \bigpi_{0}\norm_\theta=1$. For
the operator $R$ on the other hand we have only the trivial estimate
$$
\norm R^n\norm_\theta\le \Psi \theta^{-1}\xi^n
$$
for any integer $n$ and any $\theta>0$.
We are now finally in a position to choose the number $q$. We have of
course
$$
S^q=\bigpi_0+R^q+\Delta_q\;.
$$
Using
our above estimates we conclude that for any $\theta>0$, provided
$q\le q_K$ we have
$$
\norm R^q+\Delta_q\norm_\theta\le \theta^{-1}\Psi \xi^q+
A(q\alpha^{q}+\epsilon)+A\theta(1+q\epsilon)+
\epsilon B\theta^{-1}+q\epsilon B\;.\eqno(12)
$$
Now we see that if we take $\theta$ small, the term of order one
coming from (11) will be small in the new norm. In order not to upset
too much the bound on the first term, we define an optimal $\theta$ by
$$
\theta=\theta(q)=\sqrt{\Psi \xi^q\over A}
\;.
$$
There are now
several possible choices for $q$. A first natural one will
be such that the right hand side of (12) will be small for $\epsilon$
small enough. Namely, one can take for example the smallest $q$ such
that
$$
2\sqrt{A\Psi\xi^q}
+Aq\alpha^{q}\le {1\over 50}\quad\hbox{\rm and}\quad \theta(q)<1\;.
$$
Denote this number by $q_1$.
Once this $q=q_1$ has been chosen (and the corresponding
$\theta_1=\theta(q_1)$ set as above), we have
$$
\norm R^q+\Delta_q\norm_{\theta_1}\le {1\over 50}
+\epsilon\left(A+A\theta(q_1)q_1+B\theta(q_1)^{-1}+Bq_1\right)\;.
$$
Now we choose $\epsilon_0$ by
$$
\epsilon_0={1\over 50( A+A\theta(q_1)q_1+B\theta(q_1)^{-1}+Bq_1)}\;.
$$
With this choice, we can apply perturbation theory
(using $\norm R^q+\Delta_q\norm_\theta< 1/24$, see [K.II. \S 3]
and $\norm\bigpi_0
\norm_\theta=1$) as in the proof of Proposition 5 to ensure
that outside
$D_{1/2}$ the spectrum of $S^{q_{1}}$ consists of a unique simple
eigenvalue $\lambda(\epsilon)$. For the eigenvector $h_K$ we have
$$
\norm h_0-h_K\norm_\theta\le\Oun
$$
which by the equivalence of norms (note that $q_1$ and hence
$\theta(q_1)$ do not depend on $\epsilon$) implies
$$
\vee h_K\le
\Oun \quad \hbox{and if $\;\|h_0\|=1$, }\quad \|h_K\|\le \Oun\;,
$$
uniformly in $\epsilon<\epsilon_0$.
A choice of $q$ leading to a finer estimate is the number
$q_2=\beta\log\epsilon^{-1}$ where
$\beta=\inf(1/\log2,1/\log\alpha^{-1},1/\log\xi^{-1})/2$; the bound
$\beta\le{1\over 2\log 2}$ ensures condition (10).
Applying again perturbation theory with this choice of $q_2$ (and the
corresponding $\theta_2=\theta(q_2)$), we conclude using (12)
that for $\epsilon$ small
enough, if $g$ is a
function of bounded variation
$$
|\mu_K(g)-e_0(g)|\le \Oun\epsilon^{\gamma}\log\epsilon^{-1}
\norm g\norm_{\theta_2}
$$
where
$$
\gamma=\inf(\beta\log\xi^{-1}/2, 1-\beta\log\xi^{-1}/2,
\beta\log\alpha^{-1})\;.
$$
By definition this implies
$$
|\mu_K(g)-e_0(g)|\le\Oun
\epsilon^{\gamma}\log\epsilon^{-1}\norm g\norm\;.
$$
We also have from perturbation theory as in the proof of Proposition 5
$$
\norm
h_K-1\norm_{\theta_2}\le \Oun\epsilon^{\gamma}\log\epsilon^{-1}\;,
$$
hence from the definition of the $\theta$ norm
$$
\|h_K-1\|\le \Oun\epsilon^{\gamma}\log\epsilon^{-1}\;.
$$
As another consequence of perturbation theory, we have of course
$$
|\lambda(\epsilon)-1|\le \Oun\epsilon^{\gamma}\log\epsilon^{-1}\;,
$$
which implies $\lambda(\epsilon)\to 1$ when $\epsilon\to0$.
We also observe that $\mu_K$ is positive on positive functions,
therefore by density it extends to a positive functional on continuous
functions and is therefore a positive measure (which can be normalized
to a probability measure).
It is easy to verify that it is supported
by the invariant Cantor set
$$
\bigcap_{n=0}^{\infty}f_\epsilon^{-n}(K^{c})\;.
$$
that is, $\mu_{K}$ is supported by the Cantor set of
trajectories which never leave $K^{c}$.
From the above
estimate, we also conclude that $\mu_K$ converges weakly when
$\epsilon\to0$ to the SRB measure of $f_0$ which is the Lebesgue
measure.
If $u$ is a function of bounded variation, we have by definition
$$
\int\prod_{j=0}^{n}\bigchi_{K^{c}}\circ f_\epsilon^{j}\;(x)h_K(x)u(x)dx=
\int\bigchi_{K^{c}}(x)\;S^{n-1}(h_Ku)(x)dx=
$$
$$
\lambda(\epsilon)^{n-1}\left(\int\limits_{K^c}h_K(x)dx\right)
\;\mu_K(\bigchi_{K^{c}}h_Ku)
+o(\lambda(\epsilon)^{n-1})\;.
$$
From the identity
$$
S\left(
u\circ f_\epsilon\; \bigchi_{K^{c}}h_K\right)=P_\epsilon\left(u\circ f
_\epsilon
\;\bigchi_{K^{c}}h_K\right)=uP_\epsilon\left(\bigchi_{K^{c}}h_K\right)
=uS(h_K)=\lambda(\epsilon) u h_K
$$
we have
$$
\int\prod_{j=0}^{n}\bigchi_{K^{c}}\circ
f_\epsilon^{j}(x)\;u\circ f_\epsilon(x)\;h_K(x)dx=
\int\prod_{j=0}^{n-1}\bigchi_{K^{c}}
\circ f_\epsilon^{j}(x)\;u(x)\;S(h_K)(x)dx=
$$
$$
\lambda(\epsilon)^{n-1}\left(\int\limits_{K^c}h_K(x)dx\right)
\;\mu_K(h_Ku\bigchi_{K^{c}})
+o(\lambda(\epsilon)^{n-2})\;.
$$
Since the support of $\mu_K$ is contained in the interior of
$K^{c}$, we derive by letting $n$ tend to infinity that
$$
\mu_K(h_Ku\circ f_\epsilon)=\mu_K(h_Ku)\;,
$$
which is the
statement of the invariance of $h_Kd\mu_K$ for $f_\epsilon$.
We have also for a function $u$ of bounded variation
$$
{\int u\circ f_
\epsilon(x) \bigchi_{K^{c}}(x)h_K(x)dx\over \int \hbox{\rm 1}
\circ f_\epsilon(x)
\bigchi_{K^{c}}(x)h_K(x)dx}={ \int u(x) S h_K(x) dx\over
\int S h_K(x) dx}={\lambda
(\epsilon) \int u(x) h_K(x) dx\over \lambda(\epsilon) \int
h_K(x) dx}= \int u(x) h_K(x) dx,
$$
since $h_K$ has been normalized to have integral 1. This implies that
the measure
$h_Kdx$ (with support on $K^{c}$) is quasi invariant. \cqfd
Theorem 9
is now an immediate consequence of Theorem 13 and Proposition 12.
\bigskip
\beginsection{V. THE MARKOV CASE.}
This is a simple but instructive exercise. Choose $\epsilon=2^{-n}$;
then it is easy to verify that
the map $f_\epsilon$ is Markov. In fact the Markov partition is made up of
$n+2$ atoms given by
$$
A_j=\cases{[1-2^{-j},1-2^{-j-1}]& for $0\le j\le n$\cr
[1-2^{-n-1},1]& for $j=n+1$.\cr}
$$
If $h$ is a function
which belongs to the $\sigma$-algebra determined by
the above Markov partition, we denote by $h_j$ its value on the atom
$A_j$. The transfer operator acting on $h$ is easily seen to be given
by
$$
(P_\epsilon h)_j=\cases{(h_0+h_{j+1})/2& for $0\le j\le n-2$,\cr
h_0/2& for $j=n-1$,\cr
(h_0+h_{n}+h_{n+1})/2& for $j=n$,\cr
(h_0+h_{n}+h_{n+1})/2& for $j=n+1$.\cr}
$$
It follows easily from
this explicit expression that the two dimensional
linear subspace
$h_0=\cdots=h_{n-1}=0$ is invariant, and in this subspace,
the spectrum consists of a simple eigenvalue $1$ with eigenvector
$h_{n}=h_{n+1}=1$ and an eigenvalue $0$ with eigenvector
$h_{n}=-h_{n+1}=1$. Moreover,
the matrix for $P_\epsilon$ is triangular, and
one gets easily the eigenvalue equation for the rest of the spectrum
which is given
(except for the spurious root $\lambda(\epsilon)=1/2$) by
$$
2^{n+1}\lambda(\epsilon)^{n}(\lambda(\epsilon)-1)+1=0
$$
from which one can easily extract for large $n$ the behavior of the
largest eigenvalue which is given by
$$
\lambda(\epsilon)=1-2^{-(n+1)}+{\cal O}(n4^{-n})\;.
$$
It also follows easily that there is no other eigenvalue outside the
disk of radius $1/2+\Oun/n$.
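These computations are easy to check numerically. The sketch below (our own indexing conventions, exact rational arithmetic via `fractions`) builds the $(n+2)\times(n+2)$ matrix of $P_\epsilon$, checks that it preserves the integral $\sum_j|A_j|h_j$ and fixes the vector $h_n=h_{n+1}=1$, and locates the largest root of $2^{n+1}\lambda^{n}(\lambda-1)+1=0$ by bisection.

```python
from fractions import Fraction

def transfer_matrix(n):
    """Matrix of P_eps (eps = 2^-n) on the Markov partition: row j gives
    (P_eps h)_j as a combination of h_0, ..., h_{n+1}."""
    size, half = n + 2, Fraction(1, 2)
    M = [[Fraction(0)] * size for _ in range(size)]
    for j in range(n - 1):            # (P h)_j = (h_0 + h_{j+1}) / 2
        M[j][0], M[j][j + 1] = half, half
    M[n - 1][0] = half                # (P h)_{n-1} = h_0 / 2
    for j in (n, n + 1):              # (P h)_j = (h_0 + h_n + h_{n+1}) / 2
        M[j][0], M[j][n], M[j][n + 1] = half, half, half
    return M

def largest_root(n, iters=200):
    """Largest root of 2^{n+1} lam^n (lam - 1) + 1 = 0 by bisection."""
    p = lambda lam: 2 ** (n + 1) * lam ** n * (lam - 1) + 1
    lo, hi = 1.0 - 2.0 ** (-n), 1.0   # p(lo) < 0 < p(hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if p(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

The computed root then matches the asymptotics $\lambda(\epsilon)=1-2^{-(n+1)}+{\cal O}(n4^{-n})$ stated above.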
\bigskip
{\bf Acknowledgments.}
The authors are indebted to the program {\smallrm ECOS-CONICYT}.
S.M. was partially supported by
{\smallrm FONDECYT} 1940405. P.C. would like to thank the
Mittag-Leffler Institute and the Departamento de
Ingenier\'{\i}a Matem\'atica of the Universidad de Chile, where part
of this work was done, for their kind hospitality.
The authors are grateful to an anonymous referee for suggesting many
useful improvements of the text.
\beginsection{REFERENCES.}
\item{[B.K.]} M.Blank, G.Keller. Stochastic stability versus
localization in chaotic dynamical systems. Preprint 1996.
\item{[B.Y.]} V.Baladi, L.-S.Young. On the spectra of randomly
perturbed expanding maps. Commun. Math. Phys.
{\bf 156}, 355-385 (1993).
\item{[B.I.S.]} V.Baladi, S.Isola, B.Schmitt. Transfer operator for
piecewise affine approximations of interval
maps. Ann. Inst.
H. Poincar\'e, Physique Th\'eorique, {\bf 62}, 251-266
(1995).
\item{[B.K.S.]} V.Baladi, A.Kondah, B.Schmitt. Random correlations for
small perturbations of expanding maps. Random and Computational
Dynamics, to appear.
\item{[B.W.Z.]} M.N.Bussac, R.B.White, L.Zuppiroli. Particle and heat
transport in a partially stochastic magnetic field. Physics Letters A,
{\bf190}, 101-105 (1994).
\item{[C.]} P.Collet. Some Ergodic Properties of Maps of the Interval.
In ``Dynamical Systems \& Frustrated Systems'', R.Bamon, J.-M.Gambaudo
and S.Mart\'{\i}nez editors, to appear.
\item{[C.G.]} P.Collet, A.Galves.
Asymptotic distribution of entrance times for expanding
maps of the interval. {\sl Dynamical Systems and
Applications.} R.P.Agarwal editor, World Scientific 1995.
\item{[C.M.SM.]} P.Collet,
S.Mart\'{\i}nez, J.San Mart\'{\i}n. Asymptotic
laws for
one dimensional diffusions conditioned to non absorption. Ann. of
Prob. {\bf 23}, 1300-1314 (1995).
\item{[F.K.M.P.]} P.Ferrari,
H.Kesten, S.Mart\'{\i}nez, P.Picco. Existence of
quasi-stationary distributions. A renewal dynamical approach. Ann. of
Prob. {\bf 23}, 501-521 (1995).
\item{[F.W.]} M.Freidlin, A.Wentzell. {\sl Random perturbations of
dynamical systems}. Springer, Berlin Heidelberg New York 1984.
\item{[H.K.]} F.Hofbauer, G.Keller. Ergodic properties of invariant
measures for piecewise monotonic transformations. Math. Z. {\bf180},
119-140 (1982).
\item{[K.]} T.Kato. {\sl Perturbation Theory for Linear
Operators}. Springer, Berlin Heidelberg New York 1966.
\item{[Ke.]} G.Keller. Stochastic stability in some chaotic dynamical
systems. Mh. Math. {\bf94}, 313-333 (1982).
\item{[K.R.]} M.G. Krein, M.A. Rutman.
Linear operators leaving invariant a cone in a Banach
space. Amer. Math. Soc. Transl. {\bf Ser. 1, 10}, 199-225 (1962).
\item{[L.Y.]}
A.Lasota, J.Yorke. On the existence of invariant measures
for piecewise monotone transformations. Trans. Amer. Math. Soc. {\bf
186}, 481-488 (1973).
\bye