Pity the poor researcher who, having painstakingly obtained a large sample, observes that no flies are infected, or that no individuals both had leukaemia and lived near an incinerator. It is no joke, and surprisingly common when dealing with rare phenomena. Having found yourself in that situation, your immediate impulse may be to discard your data and forget the entire thing, or to go back and gather a lot more observations, if that is possible.
Nevertheless, and extraordinary as it may seem, you CAN attach confidence limits to a zero count or zero proportion: formulae do exist. But rather than merely providing these, let us invest a few minutes considering why that is possible, what is being assumed, and how such intervals can mislead us.
For the sake of argument, let us assume we have a sample of (n=) 10000, of which (f = p =) 0 are positive, and try to obtain binomial and Poisson confidence limits. All the results below were obtained from standard formulae, except for the mid-P exact intervals, which used interpolated P-values from 100000 2-tailed tests whose null parameters (P or F) were uniformly distributed.
Type of interval | Lower 95% limit | Upper 95% limit | Comments
Normal approximation to binomial (Wald) | 0 | 0 | Collapsed, because the estimated SE of p is 0
Wald + continuity correction | 0.39995 | 0.40005 | Does not enclose p = 0
Adjusted Wald | −0.00007839174 | 0.000470234 | This lower limit is impossible
Wilson score | 0 | 0.0003839984 | Approximates mid-P exact
Clopper-Pearson exact binomial formula | NaN | 0.0003688811 | Lower limit missing, because F cannot have (ν2 =) 0 df
Mid-P exact binomial by test-inversion | NaN | 0.0002996524 | No lower limit, because no P < p (see below)
Normal approximation to Poisson | 0 | 0 | Collapsed, because the estimated SE of f is 0
'Bias corrected' approximate + continuity correction | 0.9604 | 3.9204 | Does not enclose f = 0
Conventional exact Poisson formula | NaN | 3.688879 | χ^{2} cannot have 0 df
Mid-P exact Poisson by test-inversion | NaN | 2.995732 | No lower limit, because no P < p (see below)
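Most of these entries follow from closed-form expressions. The sketch below (Python, standard library only; variable names are our own) reproduces the directly-computable ones for n = 10000, f = 0 and α = 0.05.

```python
import math

n, alpha = 10_000, 0.05
z = 1.959964          # two-sided 5% point of the normal distribution
p = 0 / n             # observed proportion, p = f/n = 0

# Wald: p +/- z*SE collapses to (0, 0), because SE = sqrt(p(1-p)/n) = 0
se = math.sqrt(p * (1 - p) / n)
wald = (p - z * se, p + z * se)

# Wilson score: for p = 0 the lower limit is 0, the upper is z^2/(n + z^2)
wilson_upper = z**2 / (n + z**2)                # ~0.000384

# Clopper-Pearson: upper limit solves (1 - P)^n = alpha/2; no lower limit
cp_upper = 1 - (alpha / 2) ** (1 / n)           # ~0.000369

# Mid-P exact binomial: upper limit solves (1 - P)^n / 2 = alpha/2
midp_upper = 1 - alpha ** (1 / n)               # ~0.000300

# Exact Poisson: upper limit solves exp(-F) = alpha/2; mid-P halves that tail
poisson_upper = -math.log(alpha / 2)            # 3.688879...
midp_poisson_upper = -math.log(alpha)           # 2.995732...
```

Because the mid-P table entries above were interpolated from simulated tests, they can differ slightly from these closed-form values.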

Given these results you might reasonably wonder which of them is 'best', especially if you regard exact tests as the 'gold standard'. In answering that, we would be wise to consider what properties we expect a 95% confidence interval to have. A logical place to begin is the usual situation, where we assume our statistic has correctly estimated its parameter. After all, this is what we assume when we estimate the standard error of p from its observed value to attach normal approximation limits, or when estimating simple percentile bootstrap limits.
When p = f = P = F = 0
To avoid confusion, let us use P to indicate the true probability of obtaining a 1 (or the true proportion of infected insects) and F to indicate (λ =) Pn, where n is the number of insects in our sample (even if n is extremely large or undefined).
One immediate conclusion is that, if none are infected (P = F = 0), it would be a waste of time gathering more data. Even so, if a confidence interval could be calculated, what properties would it have? To answer this we have to assume the ONLY source of variation is random sampling (so there is, for instance, no assignment error). In that situation, if none are infected, every sample will yield the same result, and applying the same formula to each will yield the same confidence limits.
The inescapable result is that, depending upon which formula you chose, the resulting interval would ALWAYS enclose the parameter, or would NEVER enclose it.
For instance, the interval 0 to 0 does not enclose 0, nor does the interval 0.39995 to 0.40005, nor does 0.9604 to 3.9204; but the interval −0.00007839174 to 0.000470234 would always enclose 0.
Yet, under conventional inference a 95% interval is assumed to enclose the parameter on 95% of occasions.
Therefore, in the absence of any additional variation (such as 'jittering'), we cannot validly calculate a 95% interval when p = f = P = F = 0, irrespective of which method we use (approximate, score, exact, conventional, or mid-P). So in order to attach confidence limits to p = f = 0, we must assume P > 0 and F > 0.
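This all-or-nothing coverage is easy to demonstrate. In the sketch below (Python; we use an Agresti-Coull style adjusted-Wald formula as the example, which may differ slightly from the variant tabulated above) every sample drawn from a P = 0 population is identical, so every sample yields the same interval, and that interval covers P = 0 on 100% of occasions, not 95%; a formula whose interval excluded 0 would instead cover it on 0% of occasions.

```python
n, trials, z = 10_000, 1_000, 1.959964

def adjusted_wald(f, n):
    # Agresti-Coull style adjustment: add z^2/2 to f and z^2 to n
    p = (f + z**2 / 2) / (n + z**2)
    se = (p * (1 - p) / (n + z**2)) ** 0.5
    return p - z * se, p + z * se

covered = 0
for _ in range(trials):
    f = 0                    # when P = 0, EVERY sample of n flies has f = 0
    lo, hi = adjusted_wald(f, n)
    covered += lo <= 0 <= hi
print(covered / trials)      # 1.0: this interval ALWAYS encloses P = 0
```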
When p = f = 0 but P > 0 and F > 0
Now consider the situation where our estimate is NOT the same as the parameter being estimated.
In other words you accept that, simply because your sample did not contain an infected fly, you should not assume there were no infected flies to be sampled.
Provided P < 1, it is clearly possible for us to observe p = f = 0. But, whilst the usual normal approximations do not produce usable intervals for such samples, we need to see why exact intervals have difficulties with the lower confidence limit. Asserting that the F-distribution or the χ^{2} distribution cannot have 0 degrees of freedom may be mathematically correct, but it does not tell you why test-inversion fails to yield a lower limit.
To understand why this is so, you have to appreciate that exact intervals are equivalent to test-inversion intervals. Consider, for example, the P-value plot below.
{Fig. 1}
As you might expect, the mid-P upper confidence limit falls inside the conventional upper confidence limit. But we were unable to estimate a lower confidence limit for either the conventional exact or the mid-P exact interval. The reason is that no value of the test parameter (P) can be less than our observed value of p = 0. Therefore the observed result could only be tested against the LOWER TAIL of the binomial null distribution; to test p against the upper tail, p must be greater than P (or F < f). As a result, both P-value plots are L-shaped rather than Λ-shaped, and only their upper tail intersects (α/2 =) 0.025.
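The L-shape is easy to reproduce: with p = 0 the only available tail probability is P(f ≤ 0) = (1 − P)^{n}, which can only fall as the null parameter P grows. A sketch (Python, with our own grid of trial values of P):

```python
n, alpha = 10_000, 0.05

def p_value(P):
    # With p = 0 the observed result can only be tested against the lower
    # tail of the binomial null distribution: P(f <= 0) = (1 - P)^n
    return (1 - P) ** n

# Trace the test-inversion plot over a grid of null values of P
grid = [i / 10_000_000 for i in range(0, 10_000)]
curve = [p_value(P) for P in grid]

# The curve is L-shaped: strictly decreasing, so it meets alpha/2 only once,
# from above; there is no second crossing, hence no lower limit
assert all(a > b for a, b in zip(curve, curve[1:]))
upper = next(P for P in grid if p_value(P) <= alpha / 2)
print(round(upper, 7))       # ~0.0003689, the exact (Clopper-Pearson) upper limit
```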
This problem does not arise with 'normal' confidence limits because you are never going to observe a result of minus infinity (its probability is infinitesimal), so you can always test your result against a parameter which is less than it. You could of course argue that, if the true proportion of infected flies (P) is greater than zero, you could still have a lower confidence limit somewhere between 0 and P.
This would have two results:
- Your confidence limits would not enclose your observed result, p = 0.
- Testing p assuming your lower confidence limit was correct would not reject p, even if you set that lower confidence limit to zero.
Therefore, unless you are prepared to accept confidence limits that do not enclose the observed result, and that do not comply with the most general (test-inversion) definition of a confidence interval, your only recourse is to concentrate upon just one limit (the upper one) and to try to ensure that, for 95% of samples, the parameter lies between that limit and zero.
Upper 1-sided confidence limits are very important theoretically, and are perfectly valid, provided the parameter lies between the lower bound (in this case P = F = 0) and that single limit for 95% of samples.
For normallydistributed statistics, the lower bound is minus infinity (not zero), and the upper limit is expected to fall above the population's mean on 95% of occasions.
One obvious difficulty in calculating a 1-sided limit is that conventional procedures find where P-values intersect the (α/2 =) 0.025 level, not the (α =) 0.05 level. So, when the lower bound (zero) cannot vary, even if everything else performed as it ought, our nominal coverage would be 97.5% rather than 95%. In other words, when calculating a 1-sided 95% confidence limit, ALL of α must apply to that limit, not half of it.
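That coverage claim can be checked by simulation. The sketch below (Python; n, the true P, and the trial count are arbitrary choices of ours) computes 1-sided exact upper limits by inverting the binomial lower tail, then compares coverage when the limit is set at α against α/2:

```python
import math
import random

random.seed(1)
n, P_true, trials, alpha = 500, 0.01, 2_000, 0.05

def binom_cdf(f, n, P):
    # P(F <= f) for a binomial(n, P) count
    return sum(math.comb(n, k) * P**k * (1 - P) ** (n - k) for k in range(f + 1))

def upper_limit(f, n, a):
    # Smallest P whose lower-tail probability P(F <= f) drops to a, found
    # by bisection: the exact (Clopper-Pearson style) 1-sided upper limit
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if binom_cdf(f, n, mid) > a:
            lo = mid
        else:
            hi = mid
    return hi

cover_full, cover_half = 0, 0
for _ in range(trials):
    f = sum(random.random() < P_true for _ in range(n))   # one random sample
    cover_full += P_true <= upper_limit(f, n, alpha)      # ALL of alpha
    cover_half += P_true <= upper_limit(f, n, alpha / 2)  # only alpha/2
print(cover_full / trials, cover_half / trials)  # roughly 0.96 vs 0.99
```

Using all of α gives coverage close to the nominal 95%; using α/2 pushes coverage to 97.5% or more, as argued above.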
For example, if we observed p = f = 0 in a sample of n = 10000, provided we accept the lower bound is always zero, we can obtain the following 1-sided upper 95% limits:
- Wilson score: 0.0002704812
- Clopper-Pearson exact binomial: 0.0002995284
- Conventional exact Poisson: 2.995732
- Mid-P exact binomial: 0.0002303566
- Mid-P exact Poisson: 2.302585
- Hanley's: 0.0002995284
- Our mid-P equivalent: 0.000230232
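All but the interpolated entries in that list have simple closed forms, sketched below (Python; note that the 1-sided calculations use all of α = 0.05, and z is the one-sided 5% normal quantile):

```python
import math

n, alpha = 10_000, 0.05
z = 1.644854                       # ONE-sided 5% point of the normal curve

# Wilson score, 1-sided: with p = 0 the upper limit is z^2 / (n + z^2)
wilson = z**2 / (n + z**2)         # ~0.00027048

# Clopper-Pearson / Hanley: solve (1 - P)^n = alpha for P
hanley = 1 - alpha ** (1 / n)      # ~0.00029953

# Conventional exact Poisson: solve exp(-F) = alpha for F
poisson = -math.log(alpha)         # 2.995732...

# Mid-P equivalent: solve (1 - P)^n / 2 = alpha, i.e. (1 - P)^n = 2*alpha
midp_equiv = 1 - (2 * alpha) ** (1 / n)   # ~0.00023023

# Mid-P exact Poisson: solve exp(-F)/2 = alpha
midp_poisson = -math.log(2 * alpha)       # 2.302585...
```

Notice the 1-sided Clopper-Pearson and Hanley limits are the very same formula, and the Poisson limits are very nearly n times their binomial counterparts.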
Notice the 1-sided Clopper-Pearson upper 95% limit is the same as Hanley's, and (because n was large) the conventional exact Poisson limit is approximately n times that value (F = Pn ≅ 3). When n is greater than 30, Hanley's 1-sided upper limit (1 − α^{1/n}) is approximately 3/n. This is rather useful, given the usual normal approximation limits fail when p = f = 0. Since the reasoning behind his formula is instructive, and enables us to find a mid-P equivalent, we shall briefly explore it.
To simplify matters for ourselves, let us start by considering the situation where your sample comprises just (n=) 1 randomly selected binary value  in other words it represents a Bernoulli distribution, which is a special case of the binomial distribution where n=1.
Now recall that we are using P as shorthand for the probability that a randomly-selected observation equals 1, or P(f = 1). Notice also that, since p = f/n, the observed value of p CANNOT be less than zero; therefore our (conventional) P-value is simply the probability that (for a given value of P) a randomly-selected observation equals zero, or P(f = 0). In other words we can calculate that probability using the probability mass function, rather than having to use a cumulative function.
For binary values there is a very straightforward relationship between those two probabilities, since the probability of observing f = 0, P(f = 0), equals 1 − P(f = 1). So, when n = 1, the P-value is 1 − P. This (conventional) relationship is shown below (left), and compared to a P-value plot obtained by test-inversion (below right).
Dotted lines show the equivalent relationship for mid-P-values.
{Fig. 2}
In the right-hand graph our confidence limit is the probability (P) of observing f = 1 that results in a P-value equal to α = 0.05; let us call that probability P_{lim}. Notice that, because n = 1, P_{lim} = 1 − α and α = 1 − P_{lim}.
Now consider what happens when, instead of just n = 1 binary value, your sample comprises n independently-selected values. Obviously, if n is large, you are correspondingly less likely to observe p = f = 0. Provided your selection was random and independent, and given P is the probability of observing f = 1 in a single trial, the probability of observing f = n events is P^{n}, and the probability of observing f = 0 events is (1 − P)^{n}.
In which case: α = (1 − P_{lim})^{n}

To find the value of P_{lim} we must rearrange that expression,

so: α^{1/n} = 1 − P_{lim}

hence: P_{lim} = 1 − α^{1/n}

Which is Hanley's limit (given above).

Applying the same reasoning to mid-P-values: since the conventional P-value for n = 1 observation is P(f = 0), and the mid-P-value is P(f = 0)/2, the slope (shown as a dotted line in Fig. 2) is only half as steep. As a result, P_{lim} equals 1 − 2α and 2α = 1 − P_{lim}.
Following our earlier reasoning, for a sample of n binary values, 2α = (1 − P_{lim})^{n}. Rearranging that, we conclude P_{lim} is 1 − (2α)^{1/n}. This is our mid-P equivalent to Hanley's confidence limit. For samples of more than (n=) 30 observations, our mid-P-equivalent 95% limit is approximately 2.3/n.
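Both formulae, and the large-sample approximations quoted above, can be checked numerically (a sketch in Python; the assertions simply reverse the algebra of the derivation):

```python
alpha = 0.05
for n in (30, 100, 10_000):
    hanley = 1 - alpha ** (1 / n)        # solves (1 - P_lim)^n = alpha
    midp = 1 - (2 * alpha) ** (1 / n)    # solves (1 - P_lim)^n = 2*alpha
    # reverse the derivation to confirm each limit satisfies its equation
    assert abs((1 - hanley) ** n - alpha) < 1e-9
    assert abs((1 - midp) ** n - 2 * alpha) < 1e-9
    # for large n these limits approach 3/n and 2.3/n respectively
    print(n, round(n * hanley, 3), round(n * midp, 3))
```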
Note:
If it is crucial that your intervals are strictly conservative, you may prefer to ignore the test-inversion arguments and calculate 2-sided binomial 95% limits as follows:
- When p = 0: CL = 0, CU = 1 − (α/2)^{1/n} (or, if n > 30, about 3.69/n).
- When p = 1: CL = (α/2)^{1/n} (or, if n > 30, about 1 − 3.69/n), CU = 1.
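Sketched numerically (Python, with n = 10000 as in the earlier examples; the p = 1 case is simply the mirror image of the p = 0 case):

```python
n, alpha = 10_000, 0.05

# p = 0: lower limit fixed at 0, upper solves (1 - P)^n = alpha/2
CL0, CU0 = 0.0, 1 - (alpha / 2) ** (1 / n)      # CU0 is about 3.69/n

# p = 1: the mirror image of the p = 0 case
CL1, CU1 = (alpha / 2) ** (1 / n), 1.0          # CL1 is about 1 - 3.69/n

print(round(n * CU0, 3), round(n * (1 - CL1), 3))  # both about 3.69
```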