removal {FSA}  R Documentation

Population estimates for k-, 3-, or 2-pass removal data.


Computes estimates, with confidence intervals, of the population size and probability of capture from the number of fish removed in k-, 3-, or 2-passes in a closed population.


Usage

removal(
  catch,
  method = c("CarleStrub", "Zippin", "Seber3", "Seber2", "RobsonRegier2", "Moran",
    "Schnute", "Burnham"),
  alpha = 1,
  beta = 1,
  CS.se = c("Zippin", "alternative"),
  conf.level = 0.95,
  just.ests = FALSE,
  Tmult = 3,
  CIMicroFish = FALSE
)

## S3 method for class 'removal'
summary(
  object,
  parm = c("No", "p", "p1"),
  digits = getOption("digits"),
  verbose = FALSE,
  ...
)

## S3 method for class 'removal'
confint(
  object,
  parm = c("No", "p"),
  level = conf.level,
  conf.level = NULL,
  digits = getOption("digits"),
  verbose = FALSE,
  ...
)



Arguments

catch: A numerical vector of catch at each pass.

method: A single string that identifies the removal method to use. See details.

alpha: A single numeric value for the alpha parameter in the CarleStrub method (default is 1).

beta: A single numeric value for the beta parameter in the CarleStrub method (default is 1).

CS.se: A single string that identifies whether the SE in the CarleStrub method should be computed according to Seber or Zippin.

conf.level: A single number that represents the level of confidence to use for constructing confidence intervals. This is supplied to the main removal function rather than to confint.

just.ests: A logical that indicates whether just the estimates (=TRUE) or the return list (=FALSE; default; see below) is returned.

Tmult: A single numeric that will be multiplied by the total catch in all samples to set the upper value for the range of population sizes when minimizing the log-likelihood and creating confidence intervals for the Moran and Schnute methods. Large values are much slower to compute, but values that are too low may result in missing the best estimate. A warning is issued if too low a value is suspected.

CIMicroFish: A logical that indicates whether the t value used to calculate confidence intervals when method="Burnham" should be rounded to two or three decimals, and whether the confidence intervals for No should be rounded to whole numbers, as is done in MicroFish 3.0. The default (=FALSE) is to NOT round the t values or the No confidence interval. This option is provided only so that results will exactly match MicroFish results (see testing).

object: An object saved from removal().

parm: A specification of which parameters are to be given confidence intervals, either a vector of numbers or a vector of names. If missing, all parameters are considered.

digits: A single numeric that controls the number of decimals in the output from summary and confint.

verbose: A logical that indicates whether descriptive labels should be printed from summary and whether certain warnings are shown with confint.

...: Additional arguments for methods.

level: Not used, but here for compatibility with the generic confint function.


Details

The main function computes the estimates and associated standard errors, if possible, for the initial population size, No, and probability of capture, p, for one of eight methods chosen with method=: "CarleStrub" (default), "Zippin", "Seber3", "Seber2", "RobsonRegier2", "Moran", "Schnute", or "Burnham".

Confidence intervals for the first five methods are computed using standard large-sample normal distribution theory. Note that the confidence intervals for the 2- and 3-pass special cases are only approximately correct if the estimated population size is greater than 200. If the estimated population size is between 50 and 200 then a 95% CI behaves more like a 90% CI.
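As a rough illustration, a large-sample interval of this form can be sketched directly; the estimate and standard error below are hypothetical placeholders, not output from removal():

```r
# Large-sample normal CI sketch: estimate +/- z * SE
# No.hat and No.se are hypothetical values, not removal() output
No.hat <- 250                    # hypothetical estimate (> 200, per the caveat above)
No.se  <- 15                     # hypothetical standard error
z <- qnorm(1 - (1 - 0.95)/2)     # ~1.96 for a 95% CI
ci <- c(lower = No.hat - z*No.se, upper = No.hat + z*No.se)
ci
```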

Confidence intervals for the next two methods use likelihood ratio theory as described in Schnute (1983) and are only produced for the No parameter. Standard errors are not produced with the Moran or Schnute methods.
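The likelihood-ratio interval can be illustrated with a toy profile: retain every No whose negative log-likelihood lies within qchisq(conf.level, 1)/2 of the minimum. The quadratic nlogLH below is a hypothetical stand-in, not the actual Moran or Schnute likelihood:

```r
# Toy likelihood-ratio CI sketch (hypothetical quadratic profile, minimum at No = 120)
nlogLH <- function(No) 0.01*(No - 120)^2
No.grid <- 50:300                                  # candidate population sizes
nll <- sapply(No.grid, nlogLH)
keep <- nll <= min(nll) + qchisq(0.95, df = 1)/2   # LR cutoff, ~1.92
range(No.grid[keep])                               # approximate CI endpoints
```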

Confidence intervals for the last method are computed per Ken Burnham's instructions for the Burnham Method (Jack Van Deventer, personal communication). Specifically, they are calculated with the t-statistic and No-1 degrees of freedom. Please note that the MicroFish software rounds the t-statistic before it calculates the confidence intervals about No and p. If you need the confidence intervals produced by FSA::removal to duplicate MicroFish, please use CIMicroFish=TRUE.
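The t-based interval just described can be sketched similarly (hypothetical estimate and SE; the MicroFish-style rounding controlled by CIMicroFish= is omitted):

```r
# Burnham-style CI sketch: t with No-1 degrees of freedom
# No.hat and No.se are hypothetical values, not removal() output
No.hat <- 150
No.se  <- 10
tcrit <- qt(1 - (1 - 0.95)/2, df = No.hat - 1)   # slightly larger than z = 1.96
ci <- c(lower = No.hat - tcrit*No.se, upper = No.hat + tcrit*No.se)
```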


Value

A vector that contains the estimates and standard errors for No and p if just.ests=TRUE, or (default) a list that includes those estimates and standard errors.

In addition, if the Moran or Schnute method is used, the list will also contain the minimum value of the negative log-likelihood (min.nlogLH).


Testing

The Carle-Strub method matches the examples in Carle and Strub (1978) for No, p, and the variance of No. The Carle-Strub estimates of No and p match the examples in Cowx (1983) but the SE of No does not. The Carle-Strub estimates of No match the results (for estimates that they did not reject) from Jones and Stockwell (1995) to within 1 individual in most instances and within 1% for all other instances (e.g., off by 3 individuals when the estimate was 930 individuals).

The Seber3 results for No match the results in Cowx (1983).

The Seber2 results for No, p, and the SE of No match the results in example 7.4 of Seber (2002) and in Cowx (1983).

The RobsonRegier2 results for No and the SE of No match the results in Cowx (1983).

The Zippin method results do not match the examples in Seber (2002) or Cowx (1983) because removal uses the bias-corrected version from Carle and Strub (1978) and does not use the tables in Zippin (1958). The Zippin method is not yet trustworthy.

The Moran and Schnute methods match the examples in Schnute (1983) perfectly for all point estimates and within 0.1 units for all confidence intervals.

The Burnham method was tested against the free (gratis) Demo Version of MicroFish 3.0. Powell Wheeler used R to simulate 100, three-pass removal samples with capture probabilities between 0 and 1 and population sizes <= 1000. The Burnham method implemented here exactly matched MicroFish in all 100 trials for No and p. In addition, the CIs for No exactly matched all 100 trials when CIMicroFish=TRUE. Powell was not able to check the CIs for p because the MicroFish 'Quick Population Estimate' does not report them.

IFAR Chapter

10-Abundance from Depletion Data.


Author(s)

Derek H. Ogle

A. Powell Wheeler


References

Ogle, D.H. 2016. Introductory Fisheries Analyses with R. Chapman & Hall/CRC, Boca Raton, FL.

Carle, F.L. and M.R. Strub. 1978. A new method for estimating population size from removal data. Biometrics, 34:621-630.

Cowx, I.G. 1983. Review of the methods for estimating fish population size from survey removal data. Fisheries Management, 14:67-82.

Moran, P.A.P. 1951. A mathematical theory of animal trapping. Biometrika 38:307-311.

Robson, D.S., and H.A. Regier. 1968. Estimation of population number and mortality rates. pp. 124-158 in Ricker, W.E. (editor) Methods for Assessment of Fish Production in Fresh Waters. IBP Handbook NO. 3 Blackwell Scientific Publications, Oxford.

Schnute, J. 1983. A new approach to estimating populations by the removal method. Canadian Journal of Fisheries and Aquatic Sciences, 40:2153-2169.

Seber, G.A.F. 2002. The Estimation of Animal Abundance. Edward Arnold, second edition (Reprint).

van Dishoeck, P. 2009. Effects of catchability variation on performance of depletion estimators: Application to an adaptive management experiment. Masters Thesis, Simon Fraser University.

Van Deventer, J.S. 1989. Microcomputer Software System for Generating Population Statistics from Electrofishing Data–User's Guide for MicroFish 3.0. USDA Forest Service, General Technical Report INT-254. 29 p.

Van Deventer, J.S., and W.S. Platts. 1983. Sampling and estimating fish populations from streams. Transactions of the 48th North American Wildlife and Natural Resource Conference. pp. 349-354.

See Also

See depletion for related functionality.


Examples

## First example -- 3 passes
ct3 <- c(77,50,37)

# Carle Strub (default) method
p1 <- removal(ct3)

# Moran method
p2 <- removal(ct3,method="Moran")
# Schnute method
p3 <- removal(ct3,method="Schnute")

# Burnham method
p4 <- removal(ct3,method="Burnham")
## Second example -- 2 passes
ct2 <- c(77,37)

# Seber method
p4 <- removal(ct2,method="Seber2")

### Test if catchability differs between first sample and the other samples
# chi-square test statistic from negative log-likelihoods
#   from Moran and Schnute fits (from above)
chi2.val <- 2*(p2$min.nlogLH-p3$min.nlogLH)
pchisq(chi2.val,df=1,lower.tail=FALSE)         # p-value ... no significant difference

# Another LRT example ... sample 1 from Schnute (1983)
ct4 <- c(45,11,18,8)
p2a <- removal(ct4,method="Moran")
p3a <- removal(ct4,method="Schnute")
chi2.val <- 2*(p2a$min.nlogLH-p3a$min.nlogLH)  # 4.74 in Schnute(1983)
pchisq(chi2.val,df=1,lower.tail=FALSE)         # significant difference (catchability differs)

### Using lapply() to use removal() on many different groups
###   with the removals in a single variable ("long format")
## create a dummy data frame
lake <- factor(rep(c("Ash Tree","Bark","Clay"),each=5))
year <- factor(rep(c("2010","2011","2010","2011","2010","2011"),times=c(2,3,3,2,2,3)))
pass <- factor(c(1,2,1,2,3,1,2,3,1,2,1,2,1,2,3))
catch <- c(57,34,65,34,12,54,26,9,54,27,67,34,68,35,12)
d <- data.frame(lake,year,pass,catch)

## create a variable that indicates each different group
d$group <- with(d,interaction(lake,year))
## split the catch by the different groups (creates a list of catch vectors)
ds <- split(d$catch,d$group)
## apply removal() to each catch vector (i.e., different group)
res <- lapply(ds,removal,just.ests=TRUE)
res <- data.frame(t(data.frame(res,check.names=FALSE)))
## get rownames from above and split into separate columns
nms <- t(data.frame(strsplit(rownames(res),"\\.")))
attr(nms,"dimnames") <- NULL
fnl <- data.frame(nms,res)
## put names together with values
rownames(fnl) <- NULL
colnames(fnl)[1:2] <- c("Lake","Year")

### Using apply() to use removal() on many different groups
###   with the removals in several variables ("wide format")
## create a dummy data frame (just reshaped from above as
## an example; -5 to ignore the group variable from above)
d1 <- reshape(d[,-5],timevar="pass",idvar=c("lake","year"),direction="wide")
## apply removal() to each row of only the catch data
res1 <- apply(d1[,3:5],MARGIN=1,FUN=removal,method="CarleStrub",just.ests=TRUE)
res1 <- data.frame(t(data.frame(res1,check.names=FALSE)))
## add the grouping information to the results
fnl1 <- data.frame(d1[,1:2],res1)
## put names together with values
rownames(fnl1) <- NULL
