Extrapolation

In addition, extrapolation of a dose in a rodent (rat or mouse) to a comparable dose (or exposure) in humans requires either extensive pharmacokinetic data in both rodents and humans, or a series of assumptions.

From: International Encyclopedia of Public Health, 2008

Projections and Risk Assessment

Morton Glantz , Johnathan Mun , in Credit Engineering for Bankers (Second Edition), 2011

Nonlinear Extrapolation

Theory

Extrapolation involves making statistical forecasts by using historical trends that are projected for a specified period of time into the future. It is only used for time-series forecasts. For cross-sectional or mixed panel data (time-series with cross-sectional data), multivariate regression is more appropriate. This methodology is useful when major changes are not expected; that is, causal factors are expected to remain constant or the causal factors of a situation are not clearly understood. It also helps discourage the introduction of personal biases into the process.

Extrapolation is fairly reliable, relatively simple, and inexpensive. However, extrapolation, which assumes that recent and historical trends will continue, produces large forecast errors if discontinuities occur within the projected time period; that is, pure extrapolation of time series assumes that all we need to know is contained in the historical values of the series being forecasted. If we assume that past behavior is a good predictor of future behavior, extrapolation is appealing. This makes it a useful approach when all that is needed are many short-term forecasts.

This methodology estimates the f(x) function for any arbitrary x value by interpolating a smooth nonlinear curve through all the x values and, using this smooth curve, extrapolates future x values beyond the historical data set. The methodology employs either the polynomial functional form or the rational functional form (a ratio of two polynomials). Typically, a polynomial functional form is sufficient for well-behaved data, but rational functional forms are sometimes more accurate (especially with polar functions, i.e., functions with denominators approaching zero).
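The idea of fitting a smooth curve through the historical points and projecting it forward can be sketched in a few lines. The sketch below is an illustrative stand-in using NumPy's least-squares polynomial fit, not Risk Simulator's actual routine, and the data are invented.

```python
import numpy as np

def extrapolate_poly(y, periods, degree=2):
    """Fit a polynomial through the historical series y and
    project it `periods` steps beyond the data."""
    x = np.arange(len(y))
    coeffs = np.polyfit(x, y, degree)              # least-squares fit
    future_x = np.arange(len(y), len(y) + periods)
    return np.polyval(coeffs, future_x)

# A well-behaved (quadratic) history: 0, 1, 4, 9, 16 ...
history = [0.0, 1.0, 4.0, 9.0, 16.0]
forecast = extrapolate_poly(history, periods=2)    # continues 25, 36
```

A rational functional form (a ratio of two polynomials) would be fitted analogously; it can track curves with denominators approaching zero more accurately, at the cost of less stable extrapolation.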

Procedure

Start Excel and enter your data, or open an existing worksheet with historical data to forecast (Figure 8.11 uses the file Nonlinear Extrapolation from the examples folder).

Figure 8.11. Running a Nonlinear Extrapolation.

Select the time-series data and select Risk Simulator | Forecasting | Nonlinear Extrapolation.

Select the extrapolation type (automatic selection, polynomial function, or rational function are available, but in this example use automatic selection), enter the number of forecast periods desired (see Figure 8.11), and click OK.

Results Interpretation

The results report in Figure 8.12 shows the extrapolated forecast values, the error measurements, and the graphical representation of the extrapolation results. The error measurements should be used to check the validity of the forecast and are especially important when used to compare the forecast quality and accuracy of extrapolation versus time-series analysis.

Figure 8.12. Nonlinear Extrapolation Results.

Notes

When the historical data are smooth and follow some nonlinear patterns and curves, extrapolation is better than time-series analysis. However, when the data patterns follow seasonal cycles and a trend, time-series analysis will provide better results. It is always advisable to run both time-series analysis and extrapolation and compare the results to see which has a lower error measure and a better fit.
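Comparing the two approaches by an error measure can be as simple as the following sketch; the RMSE formula is standard, but the series and fitted values here are invented.

```python
import numpy as np

def rmse(actual, fitted):
    """Root mean squared error between a series and its fitted values."""
    a, f = np.asarray(actual), np.asarray(fitted)
    return float(np.sqrt(np.mean((a - f) ** 2)))

actual        = [10.0, 12.0, 15.0, 19.0]
extrapolation = [10.5, 11.5, 15.5, 18.5]   # hypothetical extrapolation fit
time_series   = [9.0, 13.0, 13.0, 21.0]    # hypothetical time-series fit

# Prefer whichever method leaves the smaller error measure.
better = ("extrapolation"
          if rmse(actual, extrapolation) < rmse(actual, time_series)
          else "time-series")
```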

URL:

https://www.sciencedirect.com/science/article/pii/B9780123785855100089

Regression

Jonathan P. Pinder , in Introduction to Business Analytics using Simulation, 2017

10.7.2 Extrapolation Beyond the Relevant Range

Extrapolation beyond the relevant range occurs when values of Y are estimated beyond the range of the X data. If the unobserved data (data outside the range of the X data) is nonlinear, then the estimates of Y can be significantly outside the confidence interval of the estimated Y values.

Consider the production cost example (Example 11.1). The smallest amount previously produced was 10,000 units. Any attempt to estimate costs below that production point might have significant errors due to potential nonlinearities in cost in the production range less than previously produced. That is, the line might bend outside the range of the data.

The exception to this caveat is time series forecasting (Chapter 12). In time series forecasting, the objective is to estimate values of Y beyond the range of the X data, such as estimating next year's sales.
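The danger of a line bending outside the data can be shown with a small sketch. The cost curve, observed range, and coefficients below are hypothetical, chosen only to illustrate a line fitted inside the range drifting away from the truth outside it.

```python
import numpy as np

# Hypothetical cost curve with a bend below the observed range
# (all numbers invented for illustration).
def true_cost(units):
    return 50.0 + 8.0 * units - 0.05 * units ** 2

units_observed = np.array([10.0, 15.0, 20.0, 25.0, 30.0])  # >= 10 (thousands)
slope, intercept = np.polyfit(units_observed, true_cost(units_observed), 1)

# Inside the relevant range the fitted line is a good estimate...
inside_err = abs((slope * 20.0 + intercept) - true_cost(20.0))
# ...but extrapolated to 2 (far below the data) it misses badly.
outside_err = abs((slope * 2.0 + intercept) - true_cost(2.0))
```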

URL:

https://www.sciencedirect.com/science/article/pii/B9780128104842000104

Census

John R. Weeks , in Encyclopedia of Social Measurement, 2005

Population Projections

A population projection is the calculation of the number of people who could be alive at a future date given the number now alive and given assumptions about the future course of mortality, fertility, and migration. In many respects, population projections are the single most useful set of tools available in demographic analysis. By enabling researchers to see what the future size and composition of the population might be under varying assumptions about trends in mortality, fertility, and migration, it is possible intelligently to evaluate what the likely course of events might be many years from now. Also, by projecting the population forward through time from some point in history, it is possible to speculate on the sources of change in the population over time. It is useful to distinguish projections from forecasts. A population forecast is a statement about what the future population is expected to be. This is different from a projection, which is a statement about what the future population could be under a given set of assumptions. There are two main ways to project populations: (1) extrapolation methods and (2) the cohort component method.

Extrapolation

Extrapolation methods are an adaptation of Eq. (1). They assume that some rate of growth will hold constant between the base year (P1—the population in the first year of a population projection) and the target year (the year to which a population is projected forward in time). We then calculate the projected population at time 2 (P2) as follows:

(33) P2 = P1 × e^(rn)

where r is the constant growth rate and n is the number of years between the base and target years.

If P1 is the population of the United States in 2000 (281,421,916) and it is assumed that the rate of population growth between 1990 and 2000 (1.2357% per year) will continue until 2050, then the projected population in the year 2050 (P2) will be 522,014,164. Actually, we would rarely use this method for national projections, but it can be used for local areas where data are not available on births, deaths, and migration.
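The calculation can be reproduced in a couple of lines; the tiny discrepancy from the quoted 522,014,164 comes from the growth rate being rounded to four decimal places.

```python
import math

def project_population(p1, annual_rate, years):
    """Exponential extrapolation: P2 = P1 * e^(r*n)."""
    return p1 * math.exp(annual_rate * years)

# US population in 2000 projected to 2050 at 1.2357% per year.
p2050 = project_population(281_421_916, 0.012357, 50)   # ~522 million
```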

Cohort Component Method

The extrapolation methods of population projection do not take into account births, deaths, or migration. If assumptions can be made about the trends in these demographic processes, then the population can be projected using the more sophisticated cohort component method. This method begins with a distribution of the population by age and sex (in absolute frequencies, not percentages) for a specific base year. The method also requires age-specific mortality rates (that is, a base-year and intermediate-year life table); age-specific fertility rates; and, if possible, age-specific rates of in- and out-migration. Cohorts are commonly arranged in 5-year groups, such as ages 0–4, 5–9, 10–14, and so on, which facilitates projecting a population forward in time in 5-year intervals. The projection then is undertaken with matrix algebra.
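One step of that matrix algebra can be sketched as a Leslie-matrix multiplication. The fertility and survival rates below are invented for illustration; a real projection would use observed life-table and fertility schedules (and migration) for each 5-year interval.

```python
import numpy as np

fertility = [0.0, 1.2, 0.3]   # births per woman per 5-year step (invented)
survival  = [0.95, 0.90]      # probability of surviving into the next age group

# Leslie matrix: fertility on the first row, survival on the sub-diagonal.
leslie = np.array([
    fertility,
    [survival[0], 0.0, 0.0],
    [0.0, survival[1], 0.0],
])

population = np.array([100.0, 80.0, 60.0])  # base-year counts by age group
projected = leslie @ population             # population one 5-year step later
```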

URL:

https://www.sciencedirect.com/science/article/pii/B0123693985002735

Causal Inference and Medical Experiments

Daniel Steel , in Philosophy of Medicine, 2011

4 External Validity And Extrapolation

Experiments typically aim to draw conclusions that extend beyond the immediate context in which they are performed. For instance, a clinical trial of a new treatment for breast cancer will aim to draw some conclusion about the effectiveness of the treatment among a large population of women, and not merely about its effectiveness among those who participated in the study. Similarly, a study of the carcinogenic effects of a compound on mice usually aims to provide some indication of its effects in humans. External validity has to do with whether the causal relationships learned in the experimental context can be generalized in this fashion. External validity is an especially obvious challenge in research involving animal models, wherein it is referred to as "animal extrapolation" (cf. [Calabrese, 1991; Steel, 2008]). This section will focus on external validity as it concerns animal research, since there is a more extensive literature on that topic.

Any extrapolation is an inference by analogy from a base to a target population. In animal extrapolation, the base is an animal model (say, laboratory mice) while humans are usually the target population. In the cases of concern here, the claim at issue in the extrapolation is a causal generalization, for instance, that a particular substance is carcinogenic or that a new vaccine is effective. The most straightforward approach to extrapolation is what can be called "simple induction." Simple induction proposes the following rule:

Assume that the causal generalization true in the base population also holds approximately in related populations, unless there is some specific reason to think otherwise.

In other words, simple induction proposes that extrapolation be treated as a default inference among populations that are related in some appropriate sense. There are, however, three aspects of the above characterization of simple induction that stand in obvious need of further clarification. In particular, to apply the above rule in any concrete case, one needs to decide what it is for a causal generalization to hold approximately, to distinguish related from unrelated populations, and to know what counts as a reason to think that the extrapolation would not be appropriate. It seems doubtful that a great deal can be said about these three issues in the abstract — the indicators of related populations, for example, can be expected to be rather domain specific. But it is possible to give examples of the sorts of considerations that may come into play.

Simple induction does not enjoin one to infer that a causal relationship in one population is a precise guide to that in another — it simply licenses the conclusion that the relationship in the related target population is "approximately" the same as that in the base population. It is easy to see that some qualification of this sort is needed if simple induction is to be reasonable. Controlled experiments generally attempt to estimate a causal effect, that is, the probability distribution of an effect variable given interventions that manipulate the cause (cf. [Pearl, 2000, p. 70]).

In biology and social science, it is rare that a causal effect in one population is exactly replicated even in very closely related populations, since the probabilities in question are sensitive to changes in background conditions. Yet it is not rare that various qualitative features of a causal effect, such as positive relevance, are shared across a wide range of populations. For example, tobacco smoke is a carcinogen among many human and non-human mammal populations. Other qualitative features of a causal effect may also be widely shared; for instance, a drug therapy may promote an effect in moderate dosages but inhibit it in large ones across a variety of species even though the precise effect differs from one species to the next. In other cases, the approximate similarity may also refer to quantitative features of the causal effect — the quantitative increase in the chance of lung cancer resulting from smoking in one population may be a reasonably good indicator of that in other closely related populations. In the case of extrapolation from animal models, it is common to take into account scaling effects due to differences in body size, since one would expect that a larger dose would be required to achieve the same result in a larger organism (cf. [Watanabe et al., 1992]). Thus, in such cases, the scaling adjustment would constitute part of what is covered by the "approximately." Depending on the context, the term "approximately" could refer to similarity with regard to any one of the aspects of the causal effect mentioned above, or other aspects, or any combination of them.

Simple induction is also restricted in allowing extrapolations only among related populations, a qualification without which the rule would obviously be unreasonable: no population can serve as a guide for every other. In biology, phylogenetic relationships are often used as a guide to relatedness for purposes of extrapolation: the more recent the last common ancestor, the more closely related the two species are (cf. [Calabrese, 1991, pp. 203-4]). A phylogenetic standard of relatedness also suggests some examples of what might count as a specific reason to think that the base population is not a reliable guide for the target population. If the causal relationship depends upon a feature of the model not shared by its most recently shared common ancestor with the target, then that is a reason to suspect that the extrapolation may be ill founded.

In many biological examples, the simple induction requires only some relatively minimal background knowledge concerning the phylogenetic relationships among the base and target populations, and its main advantage lies in this frugality of information demanded for extrapolation. Nevertheless, the weakness of the simple inductive strategy also lies in exactly this frugality: given the rough criteria of relatedness, the strategy will inevitably produce many mistaken extrapolations. According to one review of results concerning interspecies comparisons of carcinogenic effects:

Based on the experimental evidence from the CPDB [Carcinogenic Potency Database] involving prediction from rats to mice, from mice to rats, from rats or mice to hamsters, and from humans to rats and humans to mice, … one cannot assume that if a chemical induces tumors at a given site in one species it will also be positive and induce tumors at the same site in a second species; the likelihood is at most 49% [Gold et al., 1992, p. 583].

A related challenge for the simple induction is that it is not rare that there are significant differences across distinct model organisms or strains. For instance, aflatoxin B1 causes liver cancer in rats but has little carcinogenic effect in mice [Gold et al., 1992, pp. 581-2; Hengstler et al., 2003, p. 491]. One would expect that extrapolation by simple induction is more often justified when the inference is from human to human than when it is from animal to human. But the difference here is likely one of degree rather than kind, since a variety of factors (e.g. gender, race, genetics, diet, environment, etc.) can induce distinct responses to the same cause among human populations. Thus, it is of interest to ask what grounds there are, if any, for extrapolation other than simple induction.

As one might expect, there are more and less optimistic answers to this question in the literature on animal extrapolation. On the more optimistic side, there are discussions of some circumstances that facilitate and some that hinder extrapolation, often presented in connection with detailed case studies. For example, it has been observed that extrapolation is on firmer footing with respect to basic, highly conserved biological mechanisms [Wimsatt, 1998; Schaffner, 2001; Weber, 2005, pp. 180-4]. Others have observed that a close phylogenetic relationship is not necessary for extrapolation and that the use of a particular animal model for extrapolation must be supported by empirical evidence [Burian, 1993].8 These suggestions are quite sensible. The belief that some fundamental biological mechanisms are very widely conserved is no doubt a motivating premise underlying work on such simple model organisms as the nematode worm. And it is certainly right that the appropriateness of a model organism for its intended purpose is not something that may simply be assumed but a claim that requires empirical support.

Yet the above suggestions are not likely to satisfy those who take a more pessimistic view of animal extrapolation. Objections to animal extrapolation focus on causal processes that do not fall into the category of fundamental, conserved biological mechanisms. For example, Marcel Weber suggests that mechanisms be conceived of as embodying a hierarchical structure, wherein the components of a higher-level mechanism consist of lower-level mechanisms, and that while lower-level mechanisms are often highly conserved, the same is not true of the higher-level mechanisms formed from them [2001, pp. 242-3; 2005, pp. 184-6]. So, even if one agreed that basic mechanisms are highly conserved, this would do little to justify extrapolations from mice, rats, and monkeys to humans regarding such matters as the safety of a new drug or the effectiveness of a vaccine. Since critiques of animal extrapolation are frequently motivated by ethical concerns about experimentation on animals capable of suffering (cf. [LaFollette and Shanks, 1996]), they primarily concern animal research regarding less fundamental mechanisms that cannot be studied in such simpler organisms as nematode worms or slime molds. Moreover, noting that the appropriateness of an animal model for a particular extrapolation is an empirical hypothesis does not explain how such a hypothesis can be established without already knowing what one wishes to extrapolate.

The most sustained methodological critique of animal extrapolation is developed in a book and series of articles by Hugh LaFollette and Niall Shanks [1993a; 1993b; 1995; 1996]. They use the term causal analogue model (CAM) to refer to models that can ground extrapolation and hypothetical analogue model (HAM) to refer to those that function merely as sources of new hypotheses to be tested by clinical studies. According to LaFollette and Shanks, animal models can be HAMs but not CAMs. A similar, though perhaps more moderate, thesis is advanced by Marcel Weber, who maintains that, except for studies of highly conserved mechanisms, animal models primarily support only "preparative experimentation" and not extrapolation [2005, pp. 185-6]. Weber's "preparative experimentation" is similar to LaFollette and Shanks' notion of a HAM, except that it emphasizes the useful research materials and procedures derived from the animal model in addition to hypotheses [2005, pp. 174-6, 182-3].

LaFollette and Shanks' primary argument for the conclusion that model organisms can function only as HAMs and not as CAMs rests on the proposition that if a model is a CAM, then "there must be no causally relevant disanalogies between the model and the thing being modeled" [1995, p. 147; italics in original]. It is not difficult to show that animal models rarely if ever meet this stringent requirement. A second argument advanced by LaFollette and Shanks rests on the plausible claim that the appropriateness of a model organism for extrapolation must be demonstrated by empirical evidence [1993a, p. 120]. LaFollette and Shanks contend that this appropriateness cannot be established without already knowing what one hopes to learn from the extrapolation.

We have reason to believe that they [animal model and human] are causally similar only to the extent that we have detailed knowledge of the condition in both humans and animals. However, once we have enough data to be confident that the non-human animals are causally similar (and thus, that inferences from one to the other are likely), we likely know most of what the CAM is supposed to reveal [1995, p. 157].

LaFollette and Shanks presumably mean to refer to their strict CAM criterion when they write "causally similar," but the above argument can be stated independently of that criterion. Whatever the criterion of a good model, the problem is to show that the model satisfies that criterion without already knowing what we hoped to learn from the extrapolation.

Those who are more optimistic about the potential for animal extrapolation to generate informative conclusions about humans are not likely to be persuaded by these arguments. Most obviously, LaFollette and Shanks' criterion for a CAM is so stringent that it is doubtful that it could even be satisfied by two human populations. Nevertheless, LaFollette and Shanks' arguments are valuable in that they focus attention on two challenges that any adequate positive account of extrapolation must address. First, such an account must explain how it can be possible to extrapolate even when some causally relevant disanalogies are present. Secondly, an account must be given of how the suitability of the model for extrapolation can be established without already knowing what one hoped to extrapolate.

One intuitively appealing proposal is that knowledge of the mechanisms underlying the cause and effect relationship can help to guide extrapolation. For example, imagine two machines A and B. Suppose that a specific input-output relationship in machine A has been discovered by experiment, and the question is whether the same causal relationship is also true of machine B. But unfortunately, it is not possible to perform the same experiment on B to answer this question. Suppose, however, that it is possible to examine the mechanisms of the two machines — if these mechanisms were similar, then that would support extrapolating the causal relationship from one machine to the other. Thus, the mechanisms approach to extrapolation suggests that knowledge of mechanisms and factors capable of interfering with them can provide a basis for extrapolation. This idea is second nature among molecular biologists, and some authors concerned with the role of mechanisms in science have suggested it in passing (cf. [Wimsatt, 1976, p. 691]). Although appealing, the mechanisms proposal stands in need of further elaboration before it can answer the two challenges described above. First, since there inevitably will be some causally relevant differences between the mechanisms of the model and target, it needs to be explained how extrapolation can be justified even when some relevant differences are present. Secondly, comparing mechanisms would involve examining the mechanism in the target — but if the mechanism can be studied directly in the target, it is not clear why one needs to extrapolate from the model. In other words, it needs to be explained how the suitability of the model as a basis for extrapolation could be established given only partial knowledge of the mechanism in the target. Further elaboration of the mechanisms approach to extrapolation that addresses these issues can be found in [Steel, 2008].

URL:

https://www.sciencedirect.com/science/article/pii/B9780444517876500064

Fields, Domains, and Individuals

Dean Keith Simonton , in Handbook of Organizational Creativity, 2012

Extrapolation and Interpolation

The foregoing disciplinary hierarchy can be both extrapolated beyond the sciences to encompass the arts and humanities, and interpolated to accommodate within-discipline contrasts.

Extrapolation. Some of the criteria used to differentiate the various sciences also apply to the humanities (Simonton, 2004b, 2009c). Let me give just two examples.

1.

The knowledge obsolescence rate is slower for the humanistic domains of history and English than in the scientific domains of physics, chemistry, biology, psychology, and sociology (McDowell, 1982). History and English scholars can take a break from research—such as going into administration—with much less damage to their later output than is the case for scientists, particularly a natural scientist. In fact, physics is as far above the domain average in the obsolescence rate as English is below average. In concrete terms, a high-energy physicist must work much harder to avoid falling behind his or her field than does a Chaucer scholar. In other words, the domain–individual–field creativity cycle in Figure 4.1 churns much faster in the former discipline than in the latter.

2.

The lecture disfluency displayed in political science, art history, and English exceeds that exhibited in sociology and psychology (Schachter, Christenfeld, Ravina, & Bilous, 1991). Hence, the concepts that define domains in the humanities tend to be more imprecise, ambiguous, and uncertain relative to those in the social sciences. There is one fascinating exception to this generalization, however. The philosophers evidently put a premium on logical and conceptual precision because they score somewhere between psychologists and chemists on this indicator. In that limited sense, philosophy belongs in the social sciences rather than in the humanities!

Although the extrapolation is more tenuous, the disciplinary hierarchy may be extended into the arts (Elation, 1935). The extension is more tenuous because many of the criteria would have to be defined in a broader fashion. Yet if we grant this extension, then we would expect that the psychological factors that distinguish the natural from the social sciences would also differentiate the sciences from the arts. As will be shown shortly, this turns out to be the case.

Interpolation. To be more precise, the scientific hierarchy depicted in Figure 4.3 can only be considered an approximation. The specific placements should more precisely add error bars. That is, each domain is placed according to its empirical center of gravity or central tendency, about which individual scientists will vary in significant ways. The degree of variation is even great enough to show overlap between the separate domain distributions. Although a portion of this variation may be attributed to individual differences, a more interesting source of the variation is intra-disciplinary in nature. Again, let me give two illustrations.

1.

Natural-science versus human-science psychology. According to the hierarchy presented in Figure 4.3, psychology stands somewhere between biology and sociology. Although it leans toward biology, clearly some psychologists are more favorably disposed toward the social sciences, even humanities. Empirical evidence supports this view (Coan, 1968, 1973; Simonton, 2000; see also Kimble, 1984). In particular, a study of eminent psychologists found that they formed two separable psychological orientations. On the one hand are the natural-science oriented psychologists, who are objectivistic, quantitative, elementaristic, impersonal, static, and exogenist in their theory and methodology. On the other hand are the human-science oriented psychologists, who are theoretically and methodologically subjectivistic, qualitative, holistic, personal, dynamic, and endogenist. Furthermore, psychologists who represent the extremes on these dimensions tend to receive more citations than those who occupy more compromising or conciliatory positions between the extremes (Simonton, 2000). As a consequence, psychology's placement in Figure 4.3 actually represents a mean of two psychologies. It is worth pointing out that the magnitude of domain consensus inspired by Skinner's radical behaviorism (as indicated by the Journal for the Experimental Analysis of Behavior) compares quite favorably with research published in the hard sciences (Cole, 1983).

2.

Normal versus revolutionary science. Kuhn (1970) distinguished between sciences that were paradigmatic, such as physics and chemistry, and those that were pre- or non-paradigmatic, such as psychology and sociology. Scientists in the former domains operate with a higher level of theoretical and methodological consensus. In Kuhnian terminology, such scientists practice "normal" or "puzzle-solving" science. At the same time, Kuhn recognized that paradigmatic sciences often go through "crises" attributable to the appearance of "anomalies"—findings that do not fit the accepted paradigm. During these periods, the consensus breaks down, and the science becomes, in a sense, less paradigmatic. Happily, revolutionary scientists eventually appear who introduce a new paradigm to replace the old one. The discipline then can move back up to its original place in the hierarchy. Practitioners will do normal science once again.

Note that individual practitioners of paradigmatic sciences have a strong expectation that the crisis will be temporary. Practitioners in non-paradigmatic sciences do not share that belief. Thus, natural- and human-science oriented psychologists seldom try to reach some unified paradigm for the discipline.

URL:

https://www.sciencedirect.com/science/article/pii/B9780123747143000045

Effects on Health and Human Welfare

DANIEL A. VALLERO , in Fundamentals of Air Pollution (Fourth Edition), 2008

1.

By extrapolation, what will be the concentration of CO2 in the year 2050? How does this compare with the concentration in 1980?

2.

What factors influence the accumulation of a chemical in the human body?

3.

Describe normal lung function.

4.

Explain why the inhalation route for lead is considered an important hazard when it accounts for only about 20% of the potential allowable body burden.

5.

(a) Explain how CO interacts with the circulatory system, especially the relationship among CO, CO2, and O2 in blood cells, and how exposure to CO influences normal oxygenation mechanisms. (b) Why are individuals with heart disease at greater risk when exposed to elevated CO levels?

6.

From Figs. 11.8 and 11.9, form and defend a hypothesis of the types of particles and gases that may cause or exacerbate asthma.

7.

How are particle deposition and removal from the lung influenced by the size of the particles?

8.

How do exposure time and type of population influence the air quality standards established for the community and the workplace?

9.

Compare the strengths and weaknesses of health effects information obtained from epidemiological, clinical, and toxicological studies.

10.

Explain the role of valence in metal bioavailability and toxicity. Why is it unreasonable to attempt to "eliminate" metals?

URL:

https://www.sciencedirect.com/science/article/pii/B9780123736154500121

A Personal Memory

Mary Howes , Geoffrey O'Shea , in Human Memory, 2014

Retrieval Cues

In other cases, extrapolation from list recall does non work well inside the present context. Every bit noted above, recollect of some information from a target episodic memory does not impair the capacity to remember more than information from that memory: on the reverse, call back is typically enhanced. The contrary is truthful for word-listing recall. And every bit volition be explored in later chapters, retrieval cues in fact often function quite differently within these two contexts.

Another set of issues also emerges from recollections of the kind described above. It is clear that recalled memories do not typically reflect the information with the strongest activation level in LTM at that moment in time. There is content in my own long-term store that is more strongly coded by far than the memory described above. But in a deliberate attempt at recall, what might be called a particular channel for the return of content to awareness is established. This channel is dominated by the operating cues; whatever information in LTM the cues contact that is specific to those cues and that develops a fairly high level of activation is the information that will be recalled.

Tulving's lifetime work has demonstrated the following. Given the power of cues to strengthen a memory with weak background activation, the extent to which the cues overlap with target memory content determines the probability of that target being recalled (Eysenck & Keane, 2010; Tulving, 1982, 2002). In contrast, some content with what Tulving calls "high quality" will require only minimal overlap. The issue of high quality is introduced again below.


URL:

https://www.sciencedirect.com/science/article/pii/B9780124080874000050

Forecasting

Kenneth C. Land , in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

Quantitative Trend Analysis and Extrapolation

Quantitative trend analysis and extrapolation is a second major approach to forecasting. Among other empirical applications, quantitative trend analysis and extrapolation is part of the methodology of the technical analysis of stock market trends. In technical analysis, the developmental characteristics (e.g., gaps, reversals, trend lines of highs and lows, and trend channels) of prices of corporate stock equities traded in stock exchange markets, using both visual and quantitative properties, have been developed and applied for several decades (Edwards et al., 2013). There also is a tradition of applying logistic and other S-curve models to time trends of data in order to extrapolate/forecast population growth, life expectancy, the spread of a new disease such as AIDS, energy prices, the diffusion of innovations and new products/devices, and so forth (see, e.g., Modis, 1992).
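As an illustration of S-curve trend extrapolation, the sketch below fits a three-parameter logistic curve P(t) = K/(1 + A·e^(−rt)) to a short time series and projects it forward. It is a minimal pure-Python example, not taken from Modis (1992) or any particular package; for simplicity the ceiling K is assumed known, which linearizes the fit via the logit transform.

```python
import math

def fit_logistic(years, values, K):
    """Least-squares fit of P(t) = K / (1 + A * exp(-r * t)) with K known,
    using the linearizing transform z = ln((K - P) / P) = ln(A) - r * t."""
    t = [y - years[0] for y in years]
    z = [math.log((K - p) / p) for p in values]
    n = len(t)
    t_bar, z_bar = sum(t) / n, sum(z) / n
    slope = (sum((ti - t_bar) * (zi - z_bar) for ti, zi in zip(t, z))
             / sum((ti - t_bar) ** 2 for ti in t))
    return math.exp(z_bar - slope * t_bar), -slope  # A, r

def logistic(year, base_year, K, A, r):
    """Evaluate the fitted curve at any year, past or future."""
    return K / (1 + A * math.exp(-r * (year - base_year)))

# Fit to synthetic "observed" values for 2000-2010, then extrapolate to 2030.
years = list(range(2000, 2011))
obs = [100 / (1 + 9 * math.exp(-0.3 * (y - 2000))) for y in years]
A, r = fit_logistic(years, obs, K=100)
forecast_2030 = logistic(2030, 2000, 100, A, r)
```

With real data K would itself be uncertain, which is one reason S-curve extrapolations are treated cautiously.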

In the social sciences more generally, population projections are a prototypical empirical application of quantitative trend analysis and extrapolation. For example, the U.S. Census Bureau periodically produces projections of the US resident population by age, sex, race, and Hispanic origin. These projections are produced using a cohort-component method that decomposes changes in the age–sex–race/ethnicity counts of the population obtained from a decennial census year into three components of change from year to year: births into the population at age zero, age-specific deaths to the population, and age-specific net international migration (immigrants into the population minus emigrants from the population) (U.S. Bureau of the Census, 2012). Application of this method to the production of annual age–sex–race/ethnic-specific population projections from a base year thus requires assumptions about yearly changes in these three demographic components of change (future births, deaths, and net international migration).

As a specific instance, the Census Bureau National Projections for the years 2012–60 were produced using a cohort-component method starting with an estimated base population for July 1, 2011 as follows (U.S. Bureau of the Census, 2012). First, components of population change (mortality, fertility, and net international migration) were projected. Next, for each passing year, the population was advanced one year of age and the new age categories were updated using the projected survival rates and levels of net international migration for that year. A new birth cohort was then added to form the population under 1 year of age by applying projected age-specific fertility rates to the average female population aged 10–54 years and updating the new cohort for the effects of mortality and net international migration. The assumptions for the components of change were based on time-series analysis of historical trends as follows.
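The yearly update just described can be sketched as a single projection step. This is a deliberately simplified, single-sex illustration under assumptions of our own (one combined survival step, no race/ethnicity detail, infant mortality ignored, migration folded into each age's entry); the function and argument names are ours, not the Census Bureau's.

```python
def project_one_year(pop, survival, fertility, net_migration):
    """Advance an age-indexed population vector one year using a
    simplified cohort-component step.

    pop[a]           -- population at age a (a = 0..A)
    survival[a]      -- proportion surviving from age a to a + 1
    fertility[a]     -- births per person aged a during the year
    net_migration[a] -- net international migrants entering at age a
    """
    A = len(pop) - 1
    projected = [0.0] * (A + 1)
    # Age each existing cohort forward, applying survival and migration.
    for a in range(A):
        projected[a + 1] = pop[a] * survival[a] + net_migration[a + 1]
    # Add the new birth cohort at age 0 from age-specific fertility rates.
    births = sum(pop[a] * fertility[a] for a in range(A + 1))
    projected[0] = births + net_migration[0]
    return projected

# Three illustrative age groups; repeated application projects further years.
pop = project_one_year(pop=[1000.0, 900.0, 500.0],
                       survival=[0.99, 0.95, 0.0],
                       fertility=[0.0, 0.06, 0.01],
                       net_migration=[2.0, 3.0, 1.0])
```

The real method runs this step separately by sex and race/Hispanic-origin group, with the rate inputs themselves projected as the following paragraphs describe.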

Age-specific fertility rates were calculated and projected for women aged 10–54 years from birth registration data for 1989–2009, which were compiled by the U.S. National Center for Health Statistics (NCHS). The birth registration data were used in conjunction with the Census Bureau's Intercensal Estimates to produce a series of age-specific fertility rates by mother's race and Hispanic origin for five race- and Hispanic-origin groups: (1) non-Hispanic white, (2) non-Hispanic black, (3) non-Hispanic American Indian or Alaska Native (AIAN), (4) non-Hispanic Asian or Pacific Islander (API), and (5) Hispanic (of any race). Race and Hispanic origin was assigned to projected births based on the race of the mother, the racial composition of men in the projected population, and the 2010 Census distribution of race and ethnicity of women and men with children less than 18 years of age in the household. Sex was assigned to projected births within each race- and Hispanic-origin group. The sex ratios (males per 100 females) of future births were set to equal the average of the sex ratios of births for the period from 1989 to 2009, within each of the five race- and Hispanic-origin groups. The age-specific fertility rates then were projected to 2060 by assuming convergence by 2100 of the age-specific fertility rates of all five race- and Hispanic-origin groups to the average age-specific fertility rates of the non-Hispanic white group for the years 1989–2009 (1.83 births per woman).

Just one series of mortality rates was projected for the 2012 National Projections. Mortality rates were calculated from NCHS-compiled death registration data for 1989 to 2009. These rates were used in conjunction with the Population Estimates Program's Intercensal Estimates to produce a series of mortality rates by age and sex for three race- and Hispanic-origin groupings. Mortality was projected based on projections of the life expectancy at birth (e0) by sex. Changes in life expectancy at birth by sex were modeled assuming that the complement of the life expectancy (the difference between an upper-bound value, A, and life expectancy values) would decline exponentially:

[1] C(t) = A − e0(t)

where C(t) = the observed complement of life expectancy at birth at time t, A = the upper asymptote of life expectancy, and e0(t) = the life expectancy at birth at time t. The complement of life expectancy was then projected for future dates as:

[2] Ĉ(t) = Ĉ(t0)·e^(r(t − t0))

where Ĉ(t) = the projected complement of life expectancy at birth at time t, r = the rate of change in the complement of life expectancy at birth, and Ĉ(t0) = the model complement of life expectancy at time t0. The parameters r, Ĉ(t0), and A were estimated simultaneously by minimizing the sum of squared errors (SSE) between the model and the observed values of life expectancy, by sex, for the years 1999 through 2009. It was assumed that the complement of life expectancy for each of the three race- and Hispanic-origin groups would change at the same rate as for the total country for each sex.
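Equations [1] and [2] can be put to work in a few lines. In this sketch we assume an asymptote A and solve for r from just two observed life expectancies, rather than fitting all three parameters by SSE as the Census Bureau did; the numbers are purely illustrative.

```python
import math

def fit_rate(e0_start, e0_end, t_start, t_end, A):
    """Solve eq. [2] for r, given complements C = A - e0 at two dates."""
    c0, c1 = A - e0_start, A - e0_end
    return math.log(c1 / c0) / (t_end - t_start)

def project_e0(t, t0, e0_t0, A, r):
    """Project life expectancy: e0(t) = A - (A - e0(t0)) * exp(r*(t - t0))."""
    return A - (A - e0_t0) * math.exp(r * (t - t0))

# Illustrative values: e0 = 79.0 in 2000 rising to 81.0 by 2009, with an
# assumed upper asymptote A = 91 (cf. the ultimate life table for females).
r = fit_rate(79.0, 81.0, 2000, 2009, A=91.0)   # negative: complement shrinks
e0_2060 = project_e0(2060, 2009, 81.0, 91.0, r)
```

Because the complement decays exponentially toward zero, projected e0 rises ever more slowly and never exceeds A, which is the point of the functional form.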

Projected values for the complement of life expectancy for each group for selected years from 2010 through 2060 were produced by assuming that the rate of change in the complement of e0 is the same for each subpopulation as it is for the total country. Mortality rates by age were then produced using the most recent observed rates by sex- and race-origin group, the trajectory of life expectancy values, and an ultimate life table. To get an ultimate age pattern of mortality by sex, the United Nations' single-age versions of the extended Coale and Demeny model life tables were used (United Nations, 2010). The West model mortality rates with life expectancy values of 87 for males and 91 for females were selected. Using the Coale–Demeny West model, age-specific central death rates were projected for each of the three race-origin groups by sex using a Census Bureau algorithm, which creates life tables for years that have intermediate life expectancy estimates by finding the interpolation factors for the most recent and adjacent death rate inputs that would result in the desired life expectancy at birth value (Arriaga and Associates, 2003). The interpolation is done on the logarithms of the death rate values.
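The log-scale interpolation at the heart of that algorithm is simple to state. Given the most recent observed death rates and the ultimate (model life table) rates, a factor f blends them on the logarithmic scale; the Census algorithm searches for the f whose resulting life table hits the target e0, but the blending step itself looks like the following sketch (rates and names are hypothetical, not Census Bureau inputs):

```python
import math

def blend_rates(m_recent, m_ultimate, f):
    """Log-scale interpolation of age-specific death rates:
    ln(m) = (1 - f) * ln(m_recent) + f * ln(m_ultimate), 0 <= f <= 1."""
    return [math.exp((1 - f) * math.log(a) + f * math.log(b))
            for a, b in zip(m_recent, m_ultimate)]

# Hypothetical central death rates at three ages (per person-year).
recent   = [0.0060, 0.0010, 0.0400]
ultimate = [0.0020, 0.0004, 0.0250]
halfway = blend_rates(recent, ultimate, f=0.5)  # geometric mean of the two
```

Interpolating logarithms rather than the rates themselves keeps every blended rate positive and treats proportional (rather than absolute) changes in mortality as the natural metric.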

The 2012 National Projections include a Middle/Master series and three alternative series. These four series of projections provide results for differing assumptions about net international migration. The alternative series were based on assumptions of Low, High, and Constant levels of net international migration. The Constant series was produced by holding the level of net international migration from the Middle series for 2012 constant from 2012 to 2060. The High and Low series were produced by varying the level of net international migration of the foreign-born from the projection used in the Middle series by ±30%, respectively. All other methodology and assumptions used in the Low, High, and Constant series are the same as those used in the Middle series. The three alternative series are useful for analyzing potential outcomes of different levels of net international migration relative to the Middle series.

Table 1 presents the resulting projections of the resident population for 2012 through 2060 for the Middle series and the three alternative series based on Constant, High, and Low projections of international migration. In the Middle series, the population is projected to increase from 314 million in 2012 to 420 million in 2060. The Constant Migration series lowers the 2060 projected count to 393 million, the High Migration series raises it to 442 million, and the Low Migration series yields 398 million.

Table 1. Projections of population for the United States, 2015–60 (numbers in thousands)

Year Middle series population estimates Constant net international migration series High international migration series Low international migration series
2015 321   363 321   219 321   595 321   130
2016 323   849 323   606 324   200 323   497
2017 326   348 325   979 326   844 325   851
2018 328   857 328   335 329   524 328   191
2019 331   375 330   671 332   238 330   511
2020 333   896 332   981 334   983 332   808
2021 336   416 335   260 337   754 335   077
2022 338   930 337   504 340   548 337   312
2023 341   436 339   707 343   361 339   511
2024 343   929 341   866 346   190 341   669
2025 346   407 343   977 349   032 343   782
2026 348   867 346   036 351   885 345   848
2027 351   304 348   041 354   745 347   863
2028 353   718 349   987 357   611 349   824
2029 356   107 351   875 360   482 351   731
2030 358   471 353   704 363   358 353   584
2031 360   792 355   473 366   194 355   390
2032 363   070 357   185 368   991 357   149
2033 365   307 358   841 371   749 358   864
2034 367   503 360   443 374   471 360   536
2035 369   662 361   992 377   158 362   166
2036 371   788 363   494 379   816 363   759
2037 373   883 364   950 382   447 365   318
2038 375   950 366   365 385   054 366   846
2039 377   993 367   741 387   640 368   346
2040 380   016 369   081 390   210 369   821
2041 382   021 370   390 392   766 371   274
2042 384   012 371   670 395   313 372   710
2043 385   992 372   925 397   853 374   129
2044 387   965 374   158 400   391 375   538
2045 389   934 375   374 402   929 376   939
2046 391   902 376   574 405   471 378   333
2047 393   869 377   759 408   016 379   722
2048 395   841 378   934 410   570 381   110
2049 397   818 380   101 413   136 382   500
2050 399   803 381   262 415   714 383   892
2051 401   796 382   416 418   305 385   287
2052 403   798 383   566 420   910 386   686
2053 405   811 384   712 423   531 388   091
2054 407   835 385   857 426   168 389   502
2055 409   873 387   001 428   823 390   922
2056 411   923 388   146 431   497 392   350
2057 413   989 389   293 434   189 393   788
2058 416   068 390   442 436   900 395   236
2059 418   161 391   593 439   629 396   694
2060 420   268 392   746 442   374 398   160

Source: U.S. Census Bureau, Population Division, Release Date: December 2012.

The 2012 Census Bureau National Population projections have been described and presented in some detail in order to illustrate an important application of quantitative trend forecasting in the social sciences. In addition to the National Population projections, the Census Bureau produces subnational projections for each of the US states. The cohort-component method recently has been generalized and extended to the projection of both national and regional populations by household structures/composition (Zeng et al., 2013, 2014). Key features of these demographic and other applications of quantitative trend analysis and projections are the application of relatively simple statistical models (e.g., least-squares estimates of time trends) and the presence of an accounting identity that coordinates the various component projected series, combined with considerable expert substantive knowledge of the subject matter of the forecast.


URL:

https://www.sciencedirect.com/science/article/pii/B9780080970868105446

External Validity

G.E. Matt , ... M. Sklar , in International Encyclopedia of Education (3rd Edition), 2010

The Principle of Empirical Interpolation and Extrapolation

The principle of empirical interpolation and extrapolation addresses the issue of generalizing to an unsampled range of values on a particular variable (Cook, 1993). For example, if the effect was observed in second and fifth graders, empirically bound generalization based on interpolation would infer that the effect would also be observed at unstudied levels in between (i.e., third and fourth graders). Interpolation relies on the assumption that the relationship between cause and effect is known (e.g., linear) between the studied levels of a variable. In contrast, extrapolation involves generalizing beyond the range of sampled values in either direction (e.g., generalizing findings from second and third graders to first and/or fourth graders). According to Cook (1993), there is greater confidence in generalization based on interpolation than on extrapolation, and there is stronger justification for either when (1) a greater range of values is studied and (2) the effect is consistent across this range. Further, shorter extrapolations are less problematic than larger inferential leaps. An advantage of meta-analysis is that it encompasses a wider range of person and/or treatment variable values than is typically observed in any single research study (Cook, 1993). Therefore, interpolation is much more common in meta-analysis than is the riskier practice of extrapolation.
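Cook's distinction can be made concrete with a toy computation. Suppose a hypothetical effect size of 0.30 was observed in second graders and 0.45 in fifth graders; under the assumed linearity, interpolating to grade 4 stays inside the sampled range, while extrapolating to grade 8 is the longer inferential leap the principle warns about. All numbers here are invented for illustration.

```python
def line_through(x1, y1, x2, y2):
    """Return the linear function through two observed (grade, effect) points."""
    slope = (y2 - y1) / (x2 - x1)
    return lambda x: y1 + slope * (x - x1)

effect = line_through(2, 0.30, 5, 0.45)   # effects observed at grades 2 and 5
interpolated = effect(4)    # grade 4: inside the sampled range
extrapolated = effect(8)    # grade 8: outside it -- a riskier inference
```

The arithmetic is identical in both cases; what differs is the warrant for assuming the linear cause-effect relationship still holds outside the studied range.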


URL:

https://www.sciencedirect.com/science/article/pii/B9780080448947017000

Perceptual Learning

Felice Bedford , in Psychology of Learning and Motivation, 1993

2 Centrality

We investigated the conditions under which extrapolation may be more likely to occur. Thus far, extrapolation would require generalizing from two points straddling a central region (e.g., 7° to the left of straight ahead and 7° to the right) to a non-central region (e.g., 7° to 25°). Yet many systems are known to be linear only over a central range. Perhaps this knowledge has been internalized in some sense, so that a linear function in the middle of a spatial continuum would not be generalized to a non-central region in the absence of solid proof. It may be easier to go in the other direction.

The two training pairs were "moved" to the side, such that both visual locations were to the left of straight ahead. Figure 7B shows the pattern of generalization. The primary result is that extrapolation along the interpolated line does not occur to central regions either. This result occurred for two different sets of training pairs, one with 5° offsets and the other with 10° offsets.


URL:

https://www.sciencedirect.com/science/article/pii/S0079742108602935