
CAGR

Cumulative Average Growth Rate

Source: econterms

calculus of voting

A model of political voting behavior in which a citizen chooses to vote if the costs of doing so are outweighed by the strength of the citizen's preference for one candidate weighted by the anticipated probability that the citizen's vote will be decisive in the election.
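The decision rule in this model can be sketched in a few lines. The symbols p (anticipated probability of being decisive), B (strength of preference for one candidate), and C (cost of voting) are illustrative notation in the usual Riker-Ordeshook style, not taken from the entry itself.

```python
# Illustrative sketch of the calculus of voting: the citizen votes when the
# preference strength, weighted by the probability of being decisive,
# outweighs the cost of voting. p, B, C are illustrative notation.

def votes(p, B, C):
    """True if the citizen chooses to vote.

    p: anticipated probability the citizen's vote is decisive
    B: strength of the citizen's preference for one candidate
    C: cost of voting
    """
    return p * B > C

# Even a strong preference rarely justifies voting when the
# probability of being decisive is near zero:
print(votes(p=0.5, B=100.0, C=1.0))    # True
print(votes(p=1e-8, B=100.0, C=1.0))   # False
```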

Source: econterms

calibration

1. The estimation of some parameters of a model, under the assumption that the model is correct, as a middle step in the study of other parameters. Use of this word suggests that the investigator wishes to give those other parameters of the model a 'fair chance' to describe the data, not to get stuck in a side discussion about whether the calibrated parameters are ideally modeled or estimated.

2. Taking parameters that have been estimated for a similar model into one's own model, solving one's own model numerically, and simulating. Attributed to Edward Prescott.

Source: econterms

call option

A call option conveys the right, but not the obligation, to buy a specified quantity of an underlying security at a specified price (the strike price).

Source: econterms

capital

Something owned which provides ongoing services. In the national accounts, or to firms, capital is made up of durable investment goods, normally summed in units of money. Broadly: land plus physical structures plus equipment. The idea is used in models and in the national accounts.

See also human capital and social capital.

Source: econterms

capital consumption

In national accounts, this is the amount by which gross investment exceeds net investment. It is the same as replacement investment.
-- Oulton (2002, p. 13)

Source: econterms

capital deepening

Increase in capital intensity, normally in a macro context where it is measured by something analogous to the capital stock available per labor hour spent. In a micro context, it could mean the amount of capital available for a worker to use, but this use is rare.

Capital deepening is a macroeconomic concept: the magnitude of capital used in production grows faster than labor input. Industrialization involved capital deepening - that is, more and more expensive equipment with a lesser corresponding rise in wage expenses.

Capital deepening of a certain input (e.g. a certain kind of capital input, a recent key example being computer equipment) can be measured in the following way. Estimate the services provided by this input per unit of labor input in year T and in year T+1. The growth rate of that ratio between the two years is one common measure of the rate of capital deepening.
-- Oulton (2002, p. 31)
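The measurement just described reduces to a growth rate of a ratio; a minimal sketch, with made-up numbers for the two years:

```python
# A minimal sketch of the capital-deepening measure described above: the
# growth rate of (capital services per unit of labor input) between two
# years. The numbers are made up for illustration.

def capital_deepening_rate(services_T, labor_T, services_T1, labor_T1):
    """Growth rate of capital services per unit of labor input."""
    ratio_T = services_T / labor_T
    ratio_T1 = services_T1 / labor_T1
    return ratio_T1 / ratio_T - 1.0

# Capital services grow faster than hours worked:
rate = capital_deepening_rate(services_T=100.0, labor_T=50.0,
                              services_T1=120.0, labor_T1=51.0)
print(round(rate, 4))   # about 17.6% capital deepening
```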

Source: econterms

capital intensity

Amount of capital per unit of labor input.

Source: econterms

capital ratio

A measure of a bank's capital strength used by U.S. regulatory agencies.

Source: econterms

capital structure

The capital structure of a firm is broadly made up of its amounts of equity and debt.

Source: econterms

capital-augmenting

One of the ways in which an effectiveness variable can be included in a production function in a Solow model. If effectiveness A multiplies capital K but not labor L, then we say the effectiveness variable is capital-augmenting.

For example, in the model of output Y where Y = (AK)^a * L^(1-a), the effectiveness variable A is capital-augmenting, but in the model Y = A * K^a * L^(1-a) it is not.

Another example would be a capital utilization variable, as measured say by electricity usage (e.g., as in Eichenbaum). In the context of a railroad, automatic railroad signaling, track-switching, and car-coupling devices are capital-augmenting. From Moses Abramovitz and Paul A. David, 1996. 'Convergence and Deferred Catch-up: productivity leadership and the waning of American exceptionalism.' In Mosaic of Economic Growth, edited by Ralph Landau, Timothy Taylor, and Gavin Wright.

Source: econterms

capitation

The system of payment for each customer served, rather than by service performed. Both are used in various ways in U.S. medical care.

Source: econterms

CAPM

Capital Asset Pricing Model

Source: econterms

CAR

Stands for Cumulative Average Return.

A portfolio's abnormal return (AR) at each time t is AR_t = (sum from i=1 to N of ar_it)/N. Here ar_it is the abnormal return at time t of security i.

Over a window from t=1 to T, the CAR is the sum of all the ARs.
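The AR/CAR computation above can be sketched directly; the data here are made up, with two securities over a three-period window:

```python
# Sketch of the AR/CAR computation: abnormal returns ar[i][t] for N
# securities are averaged across securities at each time t (AR_t), then
# the AR_t are summed over the event window to give the CAR.

def average_abnormal_return(ar, t):
    """AR_t: the mean across securities of the abnormal returns at time t."""
    return sum(security[t] for security in ar) / len(ar)

def cumulative_average_return(ar):
    """CAR over the whole window: the sum of the AR_t."""
    T = len(ar[0])
    return sum(average_abnormal_return(ar, t) for t in range(T))

# Two securities observed over a three-period window (made-up numbers):
ar = [
    [0.01, -0.02, 0.03],   # abnormal returns of security 1
    [0.03,  0.00, 0.01],   # abnormal returns of security 2
]
print(round(cumulative_average_return(ar), 6))   # AR_t are 0.02, -0.01, 0.02
```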

Source: econterms

CARA utility

A class of utility functions, also called exponential utility. For some positive constant a it has the form u(c) = -e^(-ac)/a (the standard exponential form).
"Under this specification the elasticity of marginal utility is equal to -ac, and the instantaneous elasticity of substitution is equal to 1/ac."
The coefficient of absolute risk aversion is a; thus the abbreviation CARA, for Constant Absolute Risk Aversion. "Constant absolute risk aversion is usually thought of as a less plausible description of risk aversion than constant relative risk aversion" (that's the CRRA, which see), but it can be more analytically convenient.
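A numerical check, assuming the standard exponential form u(c) = -exp(-a*c)/a (the usual way CARA utility is written; the specific form is an assumption here): absolute risk aversion -u''(c)/u'(c) comes out equal to the constant a at every consumption level.

```python
import math

# Check that exponential utility u(c) = -exp(-a*c)/a has constant
# absolute risk aversion -u''(c)/u'(c) = a, via finite differences.

def u(c, a):
    return -math.exp(-a * c) / a

def absolute_risk_aversion(c, a, h=1e-4):
    """-u''(c)/u'(c), approximated with central finite differences."""
    u1 = (u(c + h, a) - u(c - h, a)) / (2 * h)
    u2 = (u(c + h, a) - 2 * u(c, a) + u(c - h, a)) / h ** 2
    return -u2 / u1

a = 0.5
for c in (1.0, 2.0, 10.0):
    print(round(absolute_risk_aversion(c, a), 4))   # constant, equal to a = 0.5
```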

Source: econterms

CAAR

cumulative average adjusted returns

Source: econterms

cash-in-advance constraint

A modeling idea. In a basic Arrow-Debreu general equilibrium there is no need for money because exchanges are automatic, through a Walrasian auctioneer. To study monetary phenomena, a class of models was made in which money was required to make purchases of other goods. In such a model the budget constraint is written so that the agent must have enough cash on hand to make any consumption purchase. Using this mechanism money can have a positive price in equilibrium and monetary effects can be seen in such models. Contrast money-in-the-utility function for an alternative modeling approach.

Source: econterms

catch-up

'Catch-up' refers to the long-run process by which productivity laggards close the proportional gaps that separate them from the productivity leader .... 'Convergence,' in our usage, refers to a reduction of a measure of dispersion in the relative productivity levels of the array of countries under examination.' Like Barro and Sala-i-Martin (1992)'s 'sigma-convergence', this is a narrowing of the dispersion of country productivity levels over time.

Source: econterms

Category split effect

Research on frequency estimation has shown that several factors can influence the subjective frequency of events. One of these factors is the category width. Splitting an event category into smaller subcategories can increase the subjective frequency of events: A total set of events may have less impact, or appear less frequent, subjectively, than the sum of its (exclusive) subsets. For example, imagine you are asked to judge the number of Japanese cars in your own country, or, in another condition, to judge the frequency of Honda, Nissan, Toyota, Mazda, Daihatsu and Mitsubishi cars. The sum of the judged component frequencies from the split-category condition will be higher, under many circumstances, than the compound frequency of the entire category.

Source: SFB 504

Cauchy distribution

Has thicker tails than a normal distribution.
Density function (pdf): f(x) = 1/[pi*(1+x^2)]. Distribution function (cdf): F(x) = 0.5 + (arctan x)/pi.
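A quick check of the two formulas above, including the thicker-tails claim against the standard normal density:

```python
import math

# The Cauchy pdf and cdf from the entry, plus a tail comparison with
# the standard normal density at x = 3.

def cauchy_pdf(x):
    return 1.0 / (math.pi * (1.0 + x * x))

def cauchy_cdf(x):
    return 0.5 + math.atan(x) / math.pi

# The cdf at 0 is exactly one half, by symmetry:
print(cauchy_cdf(0.0))                    # 0.5

# Thicker tails than the standard normal at x = 3:
normal_pdf_at_3 = math.exp(-4.5) / math.sqrt(2.0 * math.pi)
print(cauchy_pdf(3.0) > normal_pdf_at_3)  # True
```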

Source: econterms

Cauchy sequence

A sequence satisfies the Cauchy criterion iff for each positive real epsilon there exists a natural number N such that the distance between any two elements of the sequence past the Nth element is less than epsilon. 'Distance' must be defined in context by the user of the term.

One sometimes hears the construction 'the sequence is Cauchy', meaning that the sequence satisfies the definition.

Source: econterms

CCAPM

Stands for Consumption-based Capital Asset Pricing Model.
A theory of asset prices. Formulated in Lucas, 1978, and Breeden, 1979.

Source: econterms

CDE

Stands for Corporate Data Exchange, an organization which has data on the shareholdings of large U.S. companies.

Source: econterms

cdf

cumulative distribution function. This function describes a statistical distribution: it has the value, at each possible outcome, of the probability of receiving that outcome or a lower one. A cdf is usually denoted in capital letters. For example, F(x), with x a real number, is the probability of receiving a draw less than or equal to x. A particular form of F(x) describes the normal distribution, or any other unidimensional distribution.
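As an illustration, the standard normal cdf can be written with the error function; it gives the probability of a draw less than or equal to x, rising from 0 to 1 with F(0) = 0.5:

```python
import math

# The standard normal cdf, as one concrete example of a cdf F(x).

def F(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(F(0.0))                       # 0.5, by symmetry
print(round(F(1.96), 3))            # about 0.975
print(F(-5.0) < F(0.0) < F(5.0))    # True: a cdf is nondecreasing
```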

Source: econterms

CDFC

Stands for Concavity of distribution function condition.

Source: econterms

censored dependent variable

A dependent variable in a model is censored if observations of it cannot be seen when it takes on values in some range. That is, the independent variables are observed for such observations but the dependent variable is not.

A natural example: in data on consumers and prices paid for cars, if a consumer's willingness-to-pay for a car is negative, we will see observations with consumer information but no car price, no matter how low car prices go in the data. Price observations are then censored at zero.

Contrast truncated dependent variables.

Source: econterms

central bank

A government bank; a bank for banks.

Source: econterms

Centrality of typicality

Items with greater family resemblance to a category are judged to be more typical of the category.

Source: SFB 504

Certainty effect

The reduction of the probability of an outcome by a constant factor has more impact when the outcome was initially certain than when it was merely probable (e.g. Allais paradox).

Source: SFB 504

certainty equivalence principle

Imagine that a stochastic objective function is a function only of output and output-squared. Then the solution to the optimization problem of choosing output will have the special characteristic that only the conditional means of the future forcing variables appear in the first order conditions. (By conditional means is meant the set of means for each state of the world.) Then the solution has the "certainty equivalence" property. "That is, the problem can be separated into two stages: first, get minimum mean squared error forecasts of the exogenous [variables], which are the conditional expectations...; second, at time t, solve the nonstochastic optimization problem," using the mean in place of the random variable. "This separation of forecasting from optimization.... is computationally very convenient and explains why quadratic objective functions are assumed in much applied work. For general [functions] the certainty equivalence principle does not hold, so that the forecasting and opt problems do not 'separate.'"

Source: econterms

certainty equivalent

The amount of payoff (e.g. money or utility) that an agent would have to receive to be indifferent between that payoff and a given gamble is called that gamble's 'certainty equivalent'. For a risk averse agent (as most are assumed to be) the certainty equivalent is less than the expected value of the gamble because the agent prefers to reduce uncertainty.
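A small worked example, assuming log utility (one standard risk-averse specification; the particular utility function is an assumption, not from the entry):

```python
import math

# Certainty equivalent of a gamble under log utility: the sure amount
# whose utility equals the gamble's expected utility.
# Gamble: 50% chance of 50, 50% chance of 150 (made-up numbers).

def certainty_equivalent(outcomes, probs, u, u_inv):
    expected_utility = sum(p * u(x) for x, p in zip(outcomes, probs))
    return u_inv(expected_utility)

outcomes, probs = [50.0, 150.0], [0.5, 0.5]
ce = certainty_equivalent(outcomes, probs, math.log, math.exp)
expected_value = sum(p * x for x, p in zip(outcomes, probs))

print(round(ce, 2))            # about 86.6, below the expected value of 100
print(ce < expected_value)     # True: the agent is risk averse
```

With log utility the certainty equivalent is the geometric mean of the outcomes, here sqrt(50*150), which is why it falls below the arithmetic expected value.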

Source: econterms

CES production function

CES stands for constant elasticity of substitution. This is a function describing production, usually at a macroeconomic level, with two inputs which are usually capital and labor. As defined by Arrow, Chenery, Minhas, and Solow, 1961 (p. 230), it is written this way:

V = (b*K^(-r) + a*L^(-r))^(-1/r)

where V is value-added (though y for output is more common),
K is a measure of capital input,
L is a measure of labor input,
and a, b, and r are constants (Greek letters in the original article). Normally a>0 and b>0 and r>-1. For more details see the source article.

In this function the elasticity of substitution between capital and labor is constant for any value of K and L. It is (1+r)^(-1).
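The constancy of the elasticity can be checked numerically. A sketch with made-up parameter values: the elasticity of substitution, computed from its definition d ln(K/L) / d ln(MRTS), comes out equal to (1+r)^-1 at different input mixes.

```python
import math

# Numerical check that the CES function V = (b*K^-r + a*L^-r)^(-1/r)
# has elasticity of substitution (1+r)^-1 at any (K, L).
a, b, r = 0.6, 0.4, 1.0          # made-up constants; a, b > 0 and r > -1

def V(K, L):
    return (b * K ** -r + a * L ** -r) ** (-1.0 / r)

def mrts(K, L, h=1e-6):
    """Marginal rate of technical substitution (dV/dL)/(dV/dK)."""
    dV_dK = (V(K + h, L) - V(K - h, L)) / (2 * h)
    dV_dL = (V(K, L + h) - V(K, L - h)) / (2 * h)
    return dV_dL / dV_dK

def elasticity_of_substitution(K, L, eps=1e-4):
    """d ln(K/L) / d ln(MRTS), computed by perturbing L."""
    lo, hi = mrts(K, L * (1 - eps)), mrts(K, L * (1 + eps))
    d_ln_KL = math.log((1 - eps) / (1 + eps))
    d_ln_mrts = math.log(hi) - math.log(lo)
    return d_ln_KL / d_ln_mrts

# With r = 1 the elasticity should be (1+1)^-1 = 0.5 everywhere:
print(round(elasticity_of_substitution(2.0, 1.0), 3))   # 0.5
print(round(elasticity_of_substitution(5.0, 3.0), 3))   # 0.5 again: constant
```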

Source: econterms

CES technology

Example, adapted from Caselli and Ventura:
For capital k, labor input n, and constant b<0:
f(k,n) = (k^b + n^b)^(1/b)
Here the elasticity of substitution between capital and labor is less than one, i.e. 1/(1-b)<1.

Source: econterms

CES utility

Stands for Constant Elasticity of Substitution, a kind of utility function; a synonym for the CRRA or isoelastic utility function. Often written, presuming a constant g not equal to one, as u(c) = c^(1-g)/(1-g) (the standard isoelastic form).
This limits to u(c)=ln(c) as g goes to one.
The elasticity of substitution between consumption at any two points in time is constant, equal to 1/g. "The elasticity of marginal utility is equal to" -g. g can also be said to be the coefficient of relative risk aversion, defined as -u''(c)c/u'(c), which is why this function is also called the CRRA (constant relative risk aversion) utility function.

Source: econterms

ceteris paribus

Means "assuming all else is held constant". The author is attempting to distinguish an effect of one kind of change from any others.

Source: econterms

Ceteris Paribus

A Latin term meaning "all else held constant" or "all else remains the same." In economics, in order to study the effect of a change in one variable, we often employ the ceteris paribus assumption: to isolate the effect of the changing variable we hold everything else constant. For example, in order to study the effect of an increase in income on the equilibrium price and quantity of a good we have to assume that everything else is held constant (tastes and preferences, the price of other goods, etc.).

Source: EconPort

CEX

Abbreviation for the U.S. government's Consumer Expenditure Survey

Source: econterms

CFTC

The U.S. government's Commodities and Futures Trading Commission.

Source: econterms

CGE

An occasional abbreviation for 'computable general equilibrium' models.

Source: econterms

chained index

Describes an index number that is frequently reweighted. An example is an inflation index made up of prices weighted by the frequency with which they are paid; frequent recomputation of the weights makes it a chained index.

Source: econterms


A description of a dynamic system that is very sensitive to initial conditions and may evolve in wildly different ways from slightly different initial conditions.

Source: econterms

characteristic equation

polynomial whose roots are eigenvalues

Source: econterms

characteristic function

Denoted here PSI(t) or PSI_X(t). Defined for any random variable X with pdf f(x): PSI(t) is E[e^(itX)], which is the integral from minus infinity to infinity of e^(itx)f(x) dx. (It is closely related to the moment generating function, evaluated at an imaginary argument.) "Every distribution has a unique characteristic function; and to each characteristic function there corresponds a unique distribution of probability." -- Hogg and Craig, p. 64

Source: econterms

characteristic root

Synonym for eigenvalue.

Source: econterms

chartalism

or "state theory of money" -- a 19th-century monetary theory, based on the idea that legal restrictions or customs can or should maintain the value of money, rather than the intrinsic content of valuable metal.

Source: econterms

chi-square distribution

A continuous distribution with natural number parameter r. It is the distribution of the sum of squares of r standard normal variables. The mean is r, the variance is 2r, and the moment-generating function (mgf) is (1-2t)^(-r/2).

From an older definition in this same database: If n random values z_1, z_2, ..., z_n are drawn from a standard normal distribution, squared, and summed, the resulting statistic is said to have a chi-squared distribution with n degrees of freedom: z_1^2 + z_2^2 + ... + z_n^2 ~ X^2(n). This is a one-parameter family of distributions, and the parameter, n, is conventionally labeled the degrees of freedom of the distribution. -- quoted and paraphrased from Johnston. See also noncentral chi-squared distribution.
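The mean-r, variance-2r claims can be checked by simulation; sample size and seed here are arbitrary:

```python
import random

# Simulation check: the sum of squares of r standard normal draws
# has mean approximately r and variance approximately 2r.

random.seed(0)
r, n = 3, 100_000

draws = [sum(random.gauss(0.0, 1.0) ** 2 for _ in range(r)) for _ in range(n)]
mean = sum(draws) / n
var = sum((x - mean) ** 2 for x in draws) / n

print(round(mean, 2), round(var, 2))   # close to r = 3 and 2r = 6
```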

Source: econterms

Chicago School

Refers to a perspective on economics associated with the University of Chicago circa 1970. Variously interpreted to imply:
1) A preference for models in which information is perfect, and an associated search for empirical evidence that choices, not institutional limitations, are what result in outcomes for people. (E.g., that committing crime is a career choice; that smoking represents an informed tradeoff between health risk and immediate gratification.)
2) That antitrust law is rarely necessary, because potential competition will limit monopolist abuses.

Source: econterms

choke price

The lowest price at which the quantity demanded is zero.

Source: econterms

Cholesky decomposition

Given a symmetric positive definite square matrix X, the Cholesky decomposition of X is the factorization X=U'U, where U is the square root matrix of X and satisfies:
(1) U'U = X
(2) U is upper triangular (that is, it has all zeros below the diagonal).
Once U has been computed, one can calculate the inverse of X more easily, because X^(-1) = U^(-1)(U')^(-1), and the inverses of U and U' are easier to compute.
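A small pure-Python factorization illustrates the definition. (Libraries such as numpy.linalg.cholesky return the lower-triangular factor L = U'; it is the same decomposition, transposed.)

```python
# Pure-Python Cholesky factorization X = U'U with U upper triangular,
# for a symmetric positive definite matrix X given as nested lists.

def cholesky_upper(X):
    """Upper-triangular U with U'U = X."""
    n = len(X)
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = X[i][j] - sum(U[k][i] * U[k][j] for k in range(i))
            if i == j:
                U[i][j] = s ** 0.5          # diagonal: square root
            else:
                U[i][j] = s / U[i][i]       # off-diagonal: divide through
    return U

X = [[4.0, 2.0],
     [2.0, 3.0]]
U = cholesky_upper(X)
print(U)   # upper triangular: zero below the diagonal

# Check that U'U reproduces X:
n = len(X)
UtU = [[sum(U[k][i] * U[k][j] for k in range(n)) for j in range(n)]
       for i in range(n)]
print(UtU)
```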

Source: econterms

Cholesky factorization

Same as Cholesky decomposition.

Source: econterms

Chow test

A particular test for structural change; an econometric test to determine whether the coefficients in a regression model are the same in separate subsamples. In reference to a paper of G.C. Chow (1960), "the standard F test for the equality of two sets of coefficients in linear regression models" is called a Chow test. See derivation and explanation in Davidson and MacKinnon, p. 375-376. More info in Greene, 2nd edition, p 211-2.

Homoskedasticity of errors is assumed although this can be dubious since we are open to the possibility that the parameter vector (b) has changed.
RSSR = the sum of squared residuals from a linear regression in which b1 and b2 are assumed to be the same
SSR1 = the sum of squared residuals from a linear regression of sample 1
SSR2 = the sum of squared residuals from a linear regression of sample 2
b has dimension k, and there are n observations in total
Then the F statistic is:
((RSSR - SSR1 - SSR2)/k) / ((SSR1 + SSR2)/(n - 2k))
That test statistic is the Chow test.
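Assembling the statistic from the three sums of squared residuals is direct; the numbers below are made up, and in practice RSSR, SSR1, and SSR2 come from the three regressions described above:

```python
# The Chow F statistic built from the quantities defined in the entry.

def chow_F(RSSR, SSR1, SSR2, k, n):
    """F statistic with (k, n - 2k) degrees of freedom.

    RSSR: SSR of the pooled regression (b1 = b2 imposed)
    SSR1, SSR2: SSRs of the two subsample regressions
    k: number of coefficients; n: total observations
    """
    return ((RSSR - SSR1 - SSR2) / k) / ((SSR1 + SSR2) / (n - 2 * k))

F = chow_F(RSSR=120.0, SSR1=50.0, SSR2=40.0, k=3, n=100)
print(round(F, 3))   # 10.444; compare with F(3, 94) critical values
```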

Source: econterms

circulating capital

flows of value within a production organization. Includes stocks of raw material, work in process, finished goods inventories, and cash on hand needed to pay workers and suppliers before products are sold.

Source: econterms

CJE

An abbreviation for the Canadian Journal of Economics.

Source: econterms

CLAD

Stands for the "Censored Least Absolute Deviations" estimator. If errors are symmetric (with median of zero), this estimator is unbiased and consistent though not efficient. The errors need not be homoskedastic or normally distributed to have those attributes.

CLAD may have been defined for the first time in Powell, 1984.

Source: econterms

classical

According to Lucas (1998), a classical theory would have no explicit reference to preferences. Contrast neoclassical.

Source: econterms

Classical vs Bayesian methods

As in statistics, classical (or frequentist) methods concentrate on testing hypotheses that are derived from theory, using the data available. Bayesian econometrics (and statistics), on the other hand, stresses the role of the data itself in both the development and the testing of economic theories. While most empirical applications of econometrics use classical methods, Bayesian econometrics has gained importance for applied work in recent years, partly because of the increased computer power available for computationally intensive work. The classical vs. Bayesian controversy extends to other social sciences as well; for an appreciation of its relevance in psychology, see e.g. Gigerenzer (1987).

Source: SFB 504

Clayton Act

A 1914 U.S. law on the subject of antitrust and price discrimination.
Section two prohibits price discrimination.
Section three prohibits sales based on an exclusive dealing contract requirement that may have the effect of lessening competition.
Section seven prohibits mergers where "the effect of such acquisition may be substantially to lessen competition, or tend to create a monopoly" in any line of commerce.

Source: econterms

clear

A verb. A market clears if the vector of prices for goods is such that the excess demand at those prices is zero. That is, the quantity demanded of every good at those prices is met.

Source: econterms

cliometrics

The study of economic history; the 'metrics' at the end was added to emphasize (possibly humorously) the frequent use of regression estimation.

'The cliometric contribution was the application of a systematic body of theory -- neoclassical theory -- to history and the application of sophisticated, quantitative techniques to the specification and testing of historical models.' -- North (1990/1993) p 131.

Source: econterms

clustered data

Data whose observations are not iid but rather come in clusters that are correlated together -- e.g. a data set of individuals some of whom are siblings of others, and are therefore similar demographically.

Source: econterms

Coase theorem

Informally: that in presence of complete competitive markets and the absence of transactions costs, an efficient set of inputs to production and outputs from production will be chosen by agents regardless of how property rights over the inputs were assigned to the agents. A detailed discussion is in the Encyclopedia of Law and Economics, online.

Source: econterms

Cobb-Douglas production function

A standard production function describing how much output two inputs into a production process yield. It is used commonly in both macro and micro examples.

For capital K, labor input L, and constants a, b, and c, the Cobb-Douglas production function is
f(k,n) = b * k^a * n^c

If a+c=1 this production function has constant returns to scale. (Equivalently, in mathematical language, it would then be linearly homogenous.) This is a standard case and one often writes (1-a) in place of c.

Log-linearization simplifies the function: taking logs of both sides of a Cobb-Douglas function separates the components additively.

In the Cobb-Douglas function the elasticity of substitution between capital and labor is 1 for all values of capital and labor.
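Two of the statements above, constant returns to scale when a+c=1 and the additive separation under logs, can be checked numerically with made-up parameter values:

```python
import math

# Checks on a Cobb-Douglas function f(k,n) = b * k^a * n^c with a + c = 1.
b, a, c = 2.0, 0.3, 0.7

def f(k, n):
    return b * k ** a * n ** c

# Constant returns to scale: doubling both inputs doubles output.
x1, x2 = f(8.0, 10.0), f(16.0, 20.0)
print(abs(x2 - 2 * x1) < 1e-9)    # True

# Log-linearization: ln f(k,n) = ln b + a*ln k + c*ln n.
lhs = math.log(f(8.0, 10.0))
rhs = math.log(b) + a * math.log(8.0) + c * math.log(10.0)
print(abs(lhs - rhs) < 1e-12)     # True
```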

Source: econterms

cobweb model

A theoretical model of an adjustment process that on a price/quantity or supply/demand graph spirals toward equilibrium.

Example, from Ehrenberg and Smith: Suppose the equilibrium labor market wage for engineers is stable over a ten-year period, but at the beginning of that period the wage is above equilibrium for some reason. Operating on the assumption, let's say, that engineering wages will remain that high, too many students then go into engineering. The wage falls suddenly from oversupply when that population graduates. Too few students then choose engineering. Then there is a shortage following their graduation. Adjustment to equilibrium could be slow.

"Critical to cobweb models is the assumption that workers form myopic expectations about the future behavior of wages." "Also critical to cobweb models is that the demand curve be flatter than the supply curve; if it is not, the cobweb 'explodes' when demand shifts and an equilibrium wage is never reached."

Source: econterms

Cochrane-Orcutt estimation

An algorithm for estimating a time series linear regression in the presence of autocorrelated errors. The implicit citation is to Cochrane-Orcutt (1949).

The procedure is nicely explained in the SHAZAM manual section online at the SHAZAM web site. Their procedure includes an improvement, attributed to the Prais-Winsten transformation, that incorporates the first observation. A summary of their excellent description is below. This version of the algorithm handles only first-order autocorrelation, but the Cochrane-Orcutt method can handle more.

Suppose we wish to regress y[t] on X[t] in the presence of autocorrelated errors. Run an OLS regression of y on X and construct a series of residuals e[t]. Regress e[t] on e[t-1] to estimate the autocorrelation coefficient, denoted p here. Then construct series y* and X* by:
y*[1] = sqrt(1-p^2)*y[1],
X*[1] = sqrt(1-p^2)*X[1],

y*[t] = y[t] - p*y[t-1],
X*[t] = X[t] - p*X[t-1]

One estimates b in y=bX+u by applying this procedure iteratively -- renaming y* to y and X* to X at each step, until estimates of p have converged satisfactorily.

Using the final estimate of p, one can construct an estimate of the covariance matrix of the errors, and apply GLS to get an efficient estimate of b.

Transformed residuals, the covariance matrix of the estimate of b, R^2, and so forth can be calculated; see the source.
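The iteration can be sketched compactly for the simple case of one regressor, no intercept, and AR(1) errors, on simulated data (this sketch also skips the Prais-Winsten first-observation adjustment described above):

```python
import random

# A compact sketch of iterated Cochrane-Orcutt estimation for
# y[t] = b*x[t] + u[t] with AR(1) errors u[t] = p*u[t-1] + e[t].

def ols_slope(y, x):
    """Slope of a no-intercept OLS regression of y on x."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

# Simulate data with known b and p:
random.seed(1)
true_b, true_p, T = 2.0, 0.6, 2000
x = [random.gauss(0.0, 1.0) for _ in range(T)]
e = [random.gauss(0.0, 1.0) for _ in range(T)]
u = [0.0] * T
for t in range(1, T):
    u[t] = true_p * u[t - 1] + e[t]
y = [true_b * x[t] + u[t] for t in range(T)]

# Iterate: estimate b, regress residuals on their lag to estimate p,
# quasi-difference the data, re-estimate b, repeat until p settles.
b, p = ols_slope(y, x), 0.0
for _ in range(100):
    resid = [y[t] - b * x[t] for t in range(T)]
    p_new = ols_slope(resid[1:], resid[:-1])
    y_star = [y[t] - p_new * y[t - 1] for t in range(1, T)]
    x_star = [x[t] - p_new * x[t - 1] for t in range(1, T)]
    b = ols_slope(y_star, x_star)
    if abs(p_new - p) < 1e-8:
        p = p_new
        break
    p = p_new

print(round(b, 2), round(p, 2))   # near the true values 2.0 and 0.6
```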

Source: econterms

coefficient of determination

Same as R-squared.

Source: econterms

coefficient of variation

An attribute of a distribution: its standard deviation divided by its mean.

Example: In a series of wage distributions over time, the standard deviation may rise over time with inflation, but the coefficient of variation may not, and thus the fundamental inequality may not.
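The wage example above can be made concrete with made-up numbers: scaling every wage by the same inflation factor raises the standard deviation but leaves the coefficient of variation unchanged.

```python
import math

# Coefficient of variation: standard deviation divided by the mean.
# It is invariant to scaling every observation by the same factor.

def coefficient_of_variation(xs):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return math.sqrt(var) / mean

wages = [20.0, 30.0, 50.0]
inflated = [w * 1.5 for w in wages]    # every wage grows 50%

print(round(coefficient_of_variation(wages), 4))
print(round(coefficient_of_variation(inflated), 4))   # identical
```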

Source: econterms

cognition

Cognition is a common label for processes and structures that have to do with perception, recognition, recall, imagination, concepts, and thought, but also supposition, expectation, planning, and problem solving. A distinction should be made between cognition as a process and cognition as the result of this process (see Dorsch Psychologisches Wörterbuch, 1994).

Source: SFB 504

Cognitive dissonance theory

The cognitive dissonance theory (Festinger, 1957) is a general theoretical framework which explains how people change their opinions or hypotheses about themselves and their environment. An important application of cognitive dissonance theory is research on attitude change.

The basic assumption of cognitive dissonance theory is that people are motivated to reduce inconsistent cognitions. Cognition refers to any kind of knowledge or opinion about oneself or the world.

Two cognitions can be either relevant or irrelevant. If they are relevant, then they must be consonant or dissonant (dissonant meaning that one does not follow from the other). Dissonant cognitions produce an aversive state which the individual will try to reduce by changing one or both of the cognitions. If, for example, a heavy smoker is exposed to statistics showing that smoking leads to lung cancer, he or she can change the cognition about how much he or she smokes ("I'm really only a light smoker.") or perceive the statistical data as hysterical environmentalist propaganda and discount it.

Cognitive dissonance can be reduced by adding new cognitions, if (a) the new cognitions add weight to one side and thus, decreases the proportion of cognitive elements that are dissonant or (b) the new cognitions change the importance of the cognitive elements that are in dissonant relation with one another. The other way to reduce cognitive dissonance is to change existing cognitions. Changing existing cognitions reduces dissonance if (a) the new content makes them less contradictory to others or (b) their importance is reduced.

If new cognitions cannot be added or the existing ones changed, behaviors that have cognitive consequences favoring consonance will be recruited. Seeking new information is an example of such behavior.

Source: SFB 504

cohort

A sub-population going through some specified stage in a process. The term is often applied to describe a population of persons going through some life stage, like a first year in a new school.

Source: econterms

cointegration

"An (n x 1) vector time series y[t] is said to be cointegrated if each of the series taken individually is ... nonstationary with a unit root, while some linear combination of the series a'y is stationary ... for some nonzero (n x 1) vector a."
Hamilton uses the phrasing that y[t] is cointegrated with a', and offers a couple of examples. One is that although consumption and income time series have unit roots, consumption tends to be a roughly constant proportion of income over the long term, so (ln income) minus (ln consumption) looks stationary.

Source: econterms

collar

A collar consists of holding an underlying asset and simultaneously buying a put option (long put) and selling a call option (short call) on this underlying asset. Because of the long put, the collar hedges against losses on the underlying asset. But the short call limits the possibility of participation in the gains of the underlying asset.

Source: SFB 504

commercial paper

Commoditized short-term corporate debt.

Source: econterms

Common value auction

Instead of having statistically independent information, the bidders typically obtain private signals about an unknown common value of the resource for sale; the signals are correlated with the underlying (unknown) common value, and correlated with one another. For example, prior to auctions of oil drilling licenses, the bidding companies obtain extensive seismic information on the likely quantity of oil hidden in the earth (or sea). In order to prepare profitable bids, the bidders then have to estimate the likely information obtained by rival bidders. In particular, the equilibrium bids must incorporate the fact that, given a bidder wins the auction, all rival bids will have been lower, and thus the (unknown) common value on average will be assessed to be lower than it would have been estimated without having won the auction. In this sense, winning the auction is 'bad news' that must be anticipated and incorporated into the bids, in order to avoid falling prey to a so-called winner's curse.

Source: SFB 504

compact

A set is compact if it is closed and bounded.

The concept comes up most often in economics in the context of a theory in which a function must be maximized. Continuous functions that are well defined on a compact domain have a maximum and a minimum; this is the Weierstrass theorem. Noncontinuous functions, or functions on a noncompact domain, may not.

Source: econterms

comparative advantage

To illustrate the concept of comparative advantage requires at least two goods and at least two places where each good could be produced with scarce resources in each place. The example drawn here is from Ehrenberg and Smith (1997), page 136. Suppose the two goods are food and clothing, and that 'the price of food within the United States is 0.50 units of clothing and the price of clothing is 2 units of food. [Suppose also that] the price of food in China is 1.67 units of clothing and the price of clothing is 0.60 units of food.' Then we can say that 'the United States has a comparative advantage in producing food and China has a comparative advantage in producing clothing.' It follows that in a trading relationship the U.S. should allocate at least some of its scarce resources to producing food and China should allocate at least some of its scarce resources to producing clothing, because this is the most efficient allocation of the scarce resources and allows the price of food and clothing to be as low as possible.

Famous economist David Ricardo illustrated this in the 1800s using wool in Britain and wine from Portugal as examples. The comparative advantage concept seems to be one of the really challenging, novel, and useful abstractions in economics.

Source: econterms

compensating variation

The price a consumer would need to be paid, or the price the consumer would need to pay, to be just as well off after (a) a change in prices of products the consumer might buy, and (b) time to adapt to that change. It is assumed the consumer does not benefit or lose from producing the product.

Source: econterms

Competitive market equilibrium

Competitive, or Walrasian, market equilibrium is the traditional concept of economic equilibrium, appropriate for the analysis of commodity markets with flexible prices and many traders, and serving as the benchmark of efficiency in economic analysis. It relies crucially on the assumption of a competitive environment where buyers and sellers take the terms of trade (prices) as a given parameter of the exchange environment. Basically, each trader decides upon a quantity so small compared to the total quantity traded in the market that his individual transactions have no influence on the prices.

A Walrasian or competitive equilibrium consists of a vector of prices and an allocation such that, given the prices, each trader maximizing his objective function (profit, preferences) subject to his technological possibilities and resource constraints plans to trade into his part of the proposed allocation, and such that the prices make all net trades compatible with one another ('clear the market') by equating aggregate supply and demand for the commodities which are traded.

Although this rather narrow concept of economic equilibrium is inappropriate in many situations, such as oligopolistic market structures, public goods and externalities, collusion, or markets with price rigidities, it highlights the close connection between unregulated free price formation in competitive markets and allocative efficiency. For a broad variety of preferences, technologies, and ownership structures, competitive equilibria maximize social welfare in the sense of maximizing the sum of aggregate consumer and producer surplus (see economic rents). Not only do Walrasian markets provide an exchange institution that leads to efficient outcomes, but any efficient allocation can be reached as a competitive equilibrium by an appropriate redistribution of the traders' initial resources.

In addition, Walrasian markets minimize the informational requirements to complete a transaction: each trader only has to know the characteristics of the object traded, the price, and his own objective function (preferences, technology). However, complete information on prices and on the characteristics of the commodities is necessary to retain the efficiency features of free price formation in competitive markets. If there is asymmetric information on the quality of the commodities, prices signal the relative opportunity costs of economic decisions only imperfectly, and, as a result, allocative decisions will no longer lead to efficient market outcomes. Even worse, the repercussions of adverse quality updating can make markets break down completely, with no voluntary trade taking place at all. (Potential market breakdowns in the presence of commodities of varying quality and asymmetric information have become famous as the lemons problem.)

If markets are 'thin', traders have market power, and the competitive paradigm no longer applies. Instead, prices are explained by matching strategically formed price 'bids' (buying demands) and price 'asks' (selling offers). Accordingly, more general models of competitive markets are described as auctions. However, as the number of bidders grows large, the strategic equilibrium bids from common value auctions approach the competitive price. A similar result holds for competitive markets with perfect information where the traders are free to form coalitions which maximize the joint gains from trade. Then, the coalitionally stable outcomes form a large set, which includes in particular the (efficient) competitive allocation. Again, as the number of traders becomes large, the set of outcomes which is stable under collusive behavior shrinks, and it approaches the (unique) competitive outcome. Thus, in the limit, both the coalitional and the strategic approach to describing competitive markets collapse into the simple competitive (Walrasian) paradigm. This fact underlines both the benchmark role of perfectly competitive market equilibrium for the allocation of goods and the restrictive nature of the Walrasian concept of competitive markets.

Source: SFB 504

Complementary Goods

See Complements.

Source: EconPort


Complements

Goods that are typically consumed together. Examples include guns and bullets, peanut butter and jelly, washers and dryers. If two goods are complements, then an increase in the price of one good will lead to a decrease in the demand for the other good (the complement). Similarly, a decrease in the price of one good will lead to an increase in the demand for the complement.

If two goods are complements, the cross-price elasticity of demand is negative.

Source: EconPort
See also: cross-price elasticity of demand , 


complete

(economics theory definition) A model's markets are complete if agents can buy insurance contracts to protect them against any future time and state of the world.

(statistics definition) In a context where a distribution is known except for a parameter θ, a minimal sufficient statistic is complete if there is only one unbiased estimator of θ using that statistic.

Source: econterms

complete market

One in which the complete set of possible gambles on future states-of-the-world can be constructed with existing assets.
This is a theoretical ideal against which reality can be found more or less wanting. It is a common assumption in finance or macro models, where the set of states-of-the-world is formally defined.

Source: econterms


a data set used in finance

Source: econterms

concavity of distribution function condition

A property of a distribution function-utility function pair. (At least, it MAY require specification of the utility function; this editor can't tell well.) It is assumed to hold in some principal-agent models so as to make certain conclusions possible.

Source: econterms

concentration ratio

A way of measuring the concentration of market share held by particular suppliers in a market. "It is the percentage of total market sales accounted for by a given number of leading firms." Thus a four-firm concentration ratio is the total market share of the four firms with the largest market shares. (Sometimes this particular statistic is called the CR4.)

Source: econterms

condition number

A measure of how close a matrix is to being singular. Relevant in estimation: if the matrix of regressors is nearly singular, the data are nearly collinear, and (a) it will be hard to compute an accurate or precise inverse, and (b) a linear regression will have large standard errors.

The condition number is computed from the characteristic roots or eigenvalues of the matrix. If the largest characteristic root is denoted L and the smallest characteristic root is S (both being presumed to be positive here, that is, the matrix being diagnosed is presumed to be positive definite), then the condition number is:

gamma = (L/S)^(1/2)

Values larger than 20, according to Greene (93), are observed if and only if the matrix is 'nearly singular'. Greene cites Belsley et al (1980) for this term and the number 20.
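As a sketch of the computation (using NumPy; the nearly collinear data matrix is made up for illustration), the condition number comes from the extreme characteristic roots of the moment matrix X'X:

```python
# Illustrative condition-number diagnostic for a nearly singular moment matrix.
import numpy as np

# Two nearly collinear regressor columns (made-up data).
X = np.array([[1.0, 1.000],
              [1.0, 1.001],
              [1.0, 0.999]])
XtX = X.T @ X                          # positive definite moment matrix
roots = np.linalg.eigvalsh(XtX)        # characteristic roots, ascending order
gamma = (roots[-1] / roots[0]) ** 0.5  # gamma = (L/S)^(1/2)
print(gamma)                           # far above 20 here, flagging near-singularity
```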

Source: econterms


conditional

Has a special use in finance when used without other modifiers; often means 'conditional on time and previous asset returns'. In that context, one might read 'returns are conditionally normally distributed.'

Source: econterms

conditional factor demands

A collection of functions giving the optimal demand for each of several inputs as a function of the output expected and the prices of inputs. Often the prices are taken as given and incorporated into the functions, so that the conditional factor demands are functions of output alone.

Usual forms:

x1(w1, w2, y) is a conditional factor demand for input 1, given input prices w1 and w2, and output quantity y

Source: econterms

conditional variance

Shorthand often used in finance to mean, roughly, "variance at time t given that many events up through time t-1 are known."

For example, it has been useful in studying aggregate stock prices, which go through periods of high volatility and periods of low volatility, to model them econometrically as having a variance at time t that follows an AR process. This is the ARCH idea. In such a statistical model, the conditional variance is generally different from the unconditional variance. That is, the unconditional variance is the variance of the whole process, whereas the conditional variance can be estimated more precisely because the immediately previous values of the variance are treated as known.
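The ARCH idea can be sketched in a short simulation. This is a minimal illustration of an ARCH(1) process; the parameter values are made up, and the conditional variance at time t depends on the squared shock at t-1 while the unconditional variance stays constant.

```python
# Illustrative ARCH(1) process: sigma2[t] = a0 + a1 * e[t-1]**2.
import numpy as np

rng = np.random.default_rng(0)
a0, a1 = 0.2, 0.5            # illustrative ARCH parameters (a1 < 1)
T = 10000
e = np.zeros(T)
sigma2 = np.zeros(T)
sigma2[0] = a0 / (1 - a1)    # start at the unconditional variance
e[0] = rng.normal(0.0, sigma2[0] ** 0.5)
for t in range(1, T):
    sigma2[t] = a0 + a1 * e[t - 1] ** 2      # conditional variance at t
    e[t] = rng.normal(0.0, sigma2[t] ** 0.5)

# The conditional variance moves around even though the unconditional
# variance is the constant a0 / (1 - a1) = 0.4.
print(e.var(), sigma2.min(), sigma2.max())
```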

Source: econterms


conformable

A matrix may not have the right dimension or shape to fit into some particular operation with another matrix. Take matrix addition -- the matrices are supposed to have the same dimensions to be summed. If they don't, we can say that they are not conformable for addition. The most common application of the term comes in the context of multiplication. Multiplying an M x N matrix A by an R x S matrix B directly can only be done if N=R. Otherwise the matrices are not conformable for this purpose. If instead M=R, then the intended operation may be to take the transpose of A and multiply it by B. This operation would properly be denoted A'B, where the prime denotes the transpose of A.

Source: econterms


conglomerate

A firm operating in several industries.

Source: econterms


consistency

An estimator for a parameter is consistent iff the estimator converges in probability to the true value of the parameter; that is, the plim of the estimator, as the sample size goes to infinity, is the parameter itself. Another phrasing: an estimator is consistent if it has asymptotic power of one.

"Consistency", without a modifier, is synonymous with weak consistency.

From Davidson and MacKinnon, p. 79: If for any possible value of the parameter θ in a region of the parameter space the power of a test goes to one as the sample size n goes to infinity, that test is said to be consistent against alternatives in that region of the parameter space. That is, if as the sample size increases we can in the limit reject every false hypothesis about the parameter, the test is consistent.

How does one prove that an estimator is consistent? Here are two ways.
(1) Prove directly that if the model is correct, the estimator has power one in the limit to reject any alternative but the true parameter.
(2) Sufficient conditions for proving that an estimator is consistent are (i) that the estimator is asymptotically unbiased and (ii) that its variance collapses to zero as the sample size goes to infinity. This method of proof is usually easier than (1) and is commonly used.
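Route (2) can be illustrated by a small sampling experiment: the sample mean is unbiased for the population mean, and its sampling variance collapses toward zero as n grows. This is a minimal sketch with made-up parameters (normal draws with mean 3 and standard deviation 2).

```python
# Variance of the sample mean shrinks roughly like 4 / n as n grows.
import numpy as np

rng = np.random.default_rng(42)
variances = {}
for n in (10, 100, 10000):
    draws = rng.normal(3.0, 2.0, size=(200, n))  # 200 samples of size n
    estimates = draws.mean(axis=1)               # the sample-mean estimator
    variances[n] = estimates.var()               # sampling variance of the estimator
print(variances)
```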

Source: econterms

constant returns to scale

An attribute of a production function. A production function exhibits constant returns to scale if changing all inputs by a positive proportional factor has the effect of increasing outputs by that factor. This may be true only over some range, in which case one might say that the production function has constant returns over that range.

Source: econterms

Constant Returns To Scale

If a firm exhibits constant returns to scale, when it increases the use of inputs then output increases by the same proportion. For example, if the firm doubles the use of all inputs, then output will also double. With constant returns to scale, long-run average costs are constant.

Source: EconPort

Constant-sum games

Games in which for every combination of strategies the sum of the players' payoffs is the same. For example, auction games for risk-neutral bidders and a risk-neutral seller are constant-sum games, where a fixed social surplus from exchange is to be divided between the bidders and the bid-taker. More generally, all exchange situations which allow neither production nor destruction of resources are constant-sum games.

Source: SFB 504

Construct validity

is a type of validity that refers to the degree to which a test captures the underlying construct purportedly measured by the test.

Source: SFB 504

Consumer demand

In the theory of consumer demand, demand functions are derived for commodities by considering a model of rational choice based on utility maximization together with a description of underlying economic constraints. In the theory of consumer demand, these constraints include income (which is treated as given here while it might be endogenous in a more general model of household decisions), and commodity prices, which are also fixed from the perspective of an individual household.

Source: SFB 504

Consumer Expenditure Survey

Conducted by the U.S. government. See its Web site.

Source: econterms


Consumption

Household behavior can most easily be characterized by the consumption function, in both macroeconomics and microeconomics. The consumption function explains how much a household consumes as a function of income (and, in some cases, other explanatory variables). Note that consumption includes not only expenditures on goods but also consumption of services like living in one's own house or using durables.

Keynes (1936) postulates in his General Theory the consumption function as the relationship between consumption and disposable income. In the early 1950s the two dominant models of consumption were developed: the permanent income hypothesis and the life-cycle hypothesis. While these models were once viewed as competing, they can now be seen as complementary, with differences in emphasis which serve to illuminate different significant problems. Both models emphasize the distinction between (1) consumer expenditures measured by the national income accounts and (2) consumption which is explained by the optimal allocation of present and future resources over time.

The dependence of consumption on current income is described by the Keynesian consumption function, while the dependence of consumption on lifetime income is described by the life-cycle hypothesis and the permanent income hypothesis. The interest rate influences consumption via saving because of the intertemporal substitution from one period to a future period: income that is not used for consumption purposes can be saved and consumed one period later, earning an interest payment and hence allowing for more consumption in the future. This increase in the absolute amount available for consumption, as reflected in the interest rate, has then to be compared with the individual's rate of time preference (the latter expressing her patience with respect to later consumption, or, more generally, to delayed utility derived from consumption). In the optimum, the interest rate and the rate of time preference have to be equal. This is one of the fundamentals of intertemporal choice (as a special form of rational behavior).

Source: SFB 504

consumption beta

"A security's consumption beta is the slope in the regression of its return on per capita consumption."

Source: econterms

consumption set

The set of affordable consumption bundles. One way to define a consumption set is by a set of prices, one for each possible good, and a budget. Or a consumption set could be defined in a model by some other set of restrictions on the set of possible consumption bundles.
E.g., if consumer i can consume nonnegative quantities of all goods, it is standard to take i's consumption set X_i to be R_+^L, the set of bundles with nonnegative quantities of each of the L goods. Normally, if the agent is endowed with a set of goods, the endowment is in the consumption set.

Source: econterms

contingent valuation

The use of questionnaires about valuation to estimate the willingness of respondents to pay for public projects or programs.

Often the question is framed, "Would you accept a tax of x to pay for the program?" Any such survey must be carefully done, and even so there is dispute about the value of the basic method, as is discussed in the issue of the JEP with the Portney (1994) article.

Source: econterms

contract curve

Same as Pareto set, with the implication that it is drawn in an Edgeworth box.

Source: econterms

contraction mapping

Given a metric space S with distance measure d(), and T:S->S mapping S into itself, T is a contraction mapping if for some b in the range (0,1), d(Tx,Ty) <= b*d(x,y) for all x and y in S.

One often abbreviates the phrase 'contraction mapping' by saying simply that T is a contraction.

The function resulting from applying a contraction can slope the opposite way from the original function, as long as it is less steeply sloped.

A standard way to prove that an operator T is a contraction is to prove that it satisfies Blackwell's conditions.
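A minimal numeric sketch: the map T(x) = 0.5x + 1 is an illustrative contraction on the real line with modulus b = 0.5, and iterating it from any starting point converges to its unique fixed point, here x = 2 (Banach's fixed-point theorem).

```python
# T is a contraction: |T(x) - T(y)| = 0.5 * |x - y| for all x, y.
def T(x):
    return 0.5 * x + 1.0

x = 10.0
for _ in range(60):
    x = T(x)   # each step halves the distance to the fixed point
print(x)       # approaches the fixed point 2.0
```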

Source: econterms

contractionary fiscal policy

A government policy of reducing spending and raising taxes.
In the language of some first courses in macroeconomics, it shifts the IS curve (investment/saving curve) to the left.

Source: econterms

contractionary monetary policy

A government policy of raising interest rates charged by the central bank.
In the language of some first courses in macroeconomics, it shifts the LM curve (liquidity/money curve) to the left.

Source: econterms

control for

As used in the following way: "The effect of X on Y disappears when we control for Z", the phrase means to regress Y on both X and Z together, and to interpret the direct effect of X as the only effect. Here the effect of Z on Y has been "controlled for". It is implied that X is not causing changes in Z.

Source: econterms

Control group

In an experimental design contrasting two or more groups, the control group of subjects is not given the treatment whose effect is under investigation.

Source: SFB 504

control variable

A variable in a model controlled by an agent in order to optimize something.

Source: econterms


convergence

Multiple meanings: (1) a mathematical property of a sequence or series that approaches a value; (2) in macro: "'Catch-up' refers to the long-run process by which productivity laggards close the proportional gaps that separate them from the productivity leader .... 'Convergence,' in our usage, refers to a reduction of a measure of dispersion in the relative productivity levels of the array of countries under examination." Like Barro and Sala-i-Martin (1992)'s 'sigma-convergence', this is a narrowing of the dispersion of country productivity levels over time.

Source: econterms

convergence in quadratic mean

A kind of convergence of random variables. If x_t converges in quadratic mean it converges in probability, but it does not necessarily converge almost surely.

The following is a best guess, not known to be correct.
Let e_t be a stochastic process and F_t be an information set at time t uncorrelated with e_t:

E[e_t | F_{t-m}] converges in quadratic mean to zero as m goes to infinity IFF
E[ (E[e_t | F_{t-m}])^2 ] converges to zero as m goes to infinity.

Source: econterms


convolution

The convolution of two functions U(x) and V(x) is the function:
(U*V)(x) = integral from 0 to x of U(t)V(x-t) dt
Source: econterms

Cook's distance

A metric for deciding whether a particular point alone affects regression estimates much. After a regression is run one can consider for each data point how far it is from the means of the independent variables and the dependent variable. If it is far from the means of the independent variables it may be very influential and one can consider whether the regression results are similar without it.

One common form: D_i = [e_i^2 / (p s^2)] * [h_ii / (1 - h_ii)^2], where e_i is the i-th residual, h_ii is the i-th diagonal element of the hat matrix (the leverage of observation i), p is the number of estimated parameters, and s^2 is the estimated error variance.
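A sketch of one common textbook form of the computation, D_i = [e_i^2 / (p s^2)] * [h_ii / (1 - h_ii)^2] (not necessarily the exact form the original editor intended); the regressors, coefficients, and the planted outlier are all made up.

```python
# Cook's distance via the hat matrix, with one planted influential outlier.
import numpy as np

rng = np.random.default_rng(1)
n = 30
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one regressor
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
y[0] += 10.0                            # plant one influential outlier

H = X @ np.linalg.inv(X.T @ X) @ X.T    # hat matrix
h = np.diag(H)                          # leverages h_ii
e = y - H @ y                           # residuals
p = X.shape[1]                          # number of estimated parameters
s2 = e @ e / (n - p)                    # estimated error variance
D = (e ** 2 / (p * s2)) * h / (1 - h) ** 2
print(D.argmax())                       # which observation is most influential
```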

Source: econterms

cooperative game

A game structure in which the players have the option of planning as a group in advance of choosing their actions. Contrast noncooperative game.

Source: econterms

Coordination games

Normal form game where the players have the same number of strategies, which can be indexed such that it is always a strict Nash equilibrium for both players to play strategies having the same index.

Source: SFB 504


core

Defined in terms of an original allocation of goods among agents with specified utility functions. The core is the set of possible reallocations such that no subset of agents could break off from the others and all do better just by trading among themselves.
Equivalently: the intersection of the individually rational allocations with the Pareto efficient allocations. Individually rational, here, means the allocations such that no agent is worse off than with his endowment in the original allocation.

Source: econterms

corner solution

A choice made by an agent that is at a constraint, and not at the tangency of two classical curves on a graph: one characterizing what the agent could obtain and the other characterizing the imaginable choices that would attain the highest reachable value of the agent's objective.

A classic example is the intersection between a consumer's budget line (characterizing the maximum amounts of good X and good Y that the consumer can afford) and the highest feasible indifference curve. If the agent's best available choice is at a constraint -- e.g. among affordable bundles of good X and good Y the agent prefers quantity zero of good X -- that choice is often not at a tangency of the indifference curve and the budget line, but at a "corner".

Contrast interior solution.

Source: econterms


correlation

Two random variables are positively correlated if high values of one are likely to be associated with high values of the other. They are negatively correlated if high values of one are likely to be associated with low values of the other.

Formally, a correlation coefficient is defined between the two random variables (x and y, here). Let s_x and s_y denote the standard deviations of x and y, and let s_xy denote the covariance of x and y. The correlation coefficient between x and y, sometimes denoted r_xy, is defined by:

r_xy = s_xy / (s_x s_y)

Correlation coefficients are between -1 and 1, inclusive, by definition. They are greater than zero for positive correlation and less than zero for negative correlations.
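A minimal sketch computing the coefficient from sample moments (the data are made up), checking it against NumPy's built-in and the [-1, 1] bound:

```python
# r_xy = s_xy / (s_x * s_y) from sample covariance and standard deviations.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 1.0, 4.0, 3.0, 6.0])   # loosely increasing with x

s_xy = np.cov(x, y, ddof=1)[0, 1]                # sample covariance
r_xy = s_xy / (x.std(ddof=1) * y.std(ddof=1))    # correlation coefficient
print(r_xy)                                      # positive: x and y move together
```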

Source: econterms

cost curve

A graph of total costs of production as a function of total quantity produced.

Source: econterms

cost function

is a function of input prices and output quantity. Its value is the cost of making that output given those input prices. A common form: c(w1, w2, y) is the cost of making output quantity y using inputs that cost w1 and w2 per unit.

Source: econterms

cost-benefit analysis

An approach to public decisionmaking. Quotes below from Sugden and Williams, 1978 p. 236, with some reordering: 'Cost-benefit analysis is a 'scientific' technique, or a way of organizing thought, which is used to compare alternative social states or courses of action.' 'Cost-benefit analysis shows how choices should be made so as to pursue some given objective as efficiently as possible.' 'It has two essential characteristics, consistency and explicitness. Consistency is the principle that decisions between alternatives should be consistent with objectives....Cost-benefit analysis is explicit in that it seeks to show that particular decisions are the logical implications of particular, stated, objectives.' 'The analyst's skill is his ability to use this technique. He is hired to use this skill on behalf of his client, the decision-maker..... [The analyst] has the right to refuse offers of employment that would require him to use his skills in ways that he believes to be wrong. But to accept the role of analyst is to agree to work with the client's objectives.' p. 241: Two functions of cost-benefit analysis: It 'assists the decision-maker to pursue objectives that are, by virtue of the community's assent to the decision-making process, social objectives. And by making explicit what these objectives are, it makes the decision-maker more accountable to the community.' 'This view of cost-benefit analysis, unlike the narrower value-free interpretation of the decision-making approach, provides a justification for cost-benefit analysis that is independent of the preferences of the analyst's immediate client. An important consequence of this is that the role of the analyst is not completely subservient to that of the decision-maker. 
Because the analyst has some responsibility of principles over and above those held by the decision-maker, he may have to ask questions that the decision-maker would prefer not to answer, and which expose to debate conflicts of judgement and of interest that might otherwise comfortably have been concealed.'

Source: econterms

cost-of-living index

A cost-of-living price index measures the changing cost of a constant standard of living. The index is a scalar measure for each time period. Usually it is a positive number which rises over time to indicate that there was inflation. Two incomes can be compared across time by seeing whether the incomes changed as much as the index did.

Source: econterms


costate variable

A costate variable is, in practice, a Lagrange multiplier, or Hamiltonian multiplier.

Source: econterms

countable additivity property

the third of the properties of a measure.

Source: econterms

coupon strip

A bond can be separated into two parts that can be thought of as components: (1) a principal component, the right to receive the principal at the end date, and (2) the right to receive the coupon payments. The components are called strips. The right to receive coupon payments is the coupon strip.

Source: econterms

Cournot duopoly

A pair of firms who split a market, modeled as in the Cournot game.

Source: econterms

Cournot game

A game between two firms. Both produce a certain good, say, widgets. No other firms do. The price they receive is a decreasing function of the total quantity of widgets that the firms produce. That function is known to both firms. Each chooses a quantity to produce without knowing how much the other will produce.

Source: econterms

Cournot model

A generalization of the Cournot game to describe industry structure. Each of N firms will choose a quantity of output. Price is a commonly known decreasing function of total output. All firms know N and take the output of the others as given. Each firm has a cost function c_i(q_i). Usually the cost functions are treated as common knowledge. Often the cost functions are assumed to be the same for all firms.

The prediction of the model is that the firms will choose Nash equilibrium output levels.

Formally, from notes given by Michael Whinston to the Economics D50-1 class at Northwestern U. on Sept 23, 1997:
Denote x_i as a quantity that firm i considers,
X as the total quantity (the sum of the x_i's),
x_i* and X* as the Nash equilibrium levels of those quantities,
X_{-i} as the total quantity chosen by all firms other than firm i,
and p(X) as the function mapping total quantity to price in the market.

Each firm i solves:

max over x_i of:  p(x_i + X_{-i}) x_i - c_i(x_i)

Assuming x_i* is greater than 0 for all i, the first order conditions characterizing the Nash equilibrium output levels are the N equations:

p'(X*) x_i* + p(X*) = c_i'(x_i*) for each i.
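The equilibrium condition can be verified in a small sketch. Linear demand p(X) = a - bX and a constant marginal cost c are illustrative assumptions (not part of the general model), under which the symmetric equilibrium is x_i* = (a - c) / (b (N + 1)).

```python
# Symmetric Cournot-Nash equilibrium under linear demand and constant cost.
a, b, c, N = 100.0, 1.0, 10.0, 2    # all parameter values illustrative

x_star = (a - c) / (b * (N + 1))    # each firm's equilibrium quantity
X_star = N * x_star
p_star = a - b * X_star

# First order condition p'(X*) x_i* + p(X*) = c_i'(x_i*), with p' = -b:
foc_gap = -b * x_star + p_star - c
print(x_star, p_star, foc_gap)      # foc_gap is ~0 at the equilibrium
```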

Source: econterms

covariance stationary

A stochastic process is covariance stationary if neither its mean nor its autocovariances depend on the index t.

Source: econterms

Covered short call (written call)

A covered short call consists of holding an underlying asset and simultaneously selling a call option (short call) on this underlying asset. Although the name 'call hedge' appears in the literature, the covered short call is not an actual hedge strategy. It is only possible to hedge losses on the underlying asset up to the amount of the option price; higher losses will only be reduced by this amount. On the other hand, it is not possible to participate in gains of the underlying asset, because in this case the option will be exercised, i.e. the seller has to deliver the underlying asset.

Source: SFB 504

Cowles Commission

An American research group, based at the University of Chicago in the 1940s and early 1950s, whose work on econometrics focused attention on the problem of simultaneous equations. In some tellings of the history this had an impact on the field -- other problems, such as errors-in-variables (measurement errors in the independent variables), were set aside or given lower priority elsewhere too because of the prestige and influence of the Cowles Commission.

Source: econterms


CPI

The Consumer Price Index, which is a measure of the cost of goods purchased by the average U.S. household. It is calculated by the U.S. government's Bureau of Labor Statistics.

As a pure measure of inflation, the CPI has some flaws:
1) new product bias (new products are not counted for a while after they appear)
2) discount store bias (consumers who care won't pay full price)
3) substitution bias (variations in price can cause consumers to respond by substituting on the spot, but the basic measure holds their consumption of various goods constant)
4) quality bias (product improvements are under-counted)
5) formula bias (overweighting of sale items in sample rotation)

Source: econterms


CPS

The Current Population Survey (of the U.S.) is compiled by the U.S. Bureau of the Census, which is in the Department of Commerce. The CPS is the source of official government statistics on employment and unemployment in the U.S. Each month 56,500-59,500 households are interviewed about their average weekly earnings and average hours worked. The households are selected by area to represent the states and the nation. "Each household is interviewed once a month for four consecutive months in one year and again for the corresponding time period a year later" to make month-to-month and year-to-year comparisons possible. The March CPS is special; for one thing, respondents are asked then about insurance.

Source: econterms

Cramer-Rao lower bound

Whenever the Fisher information I(b) is a well-defined matrix or number, the variance of an unbiased estimator B for b is at least as large as [I(b)]^(-1).

Source: econterms

criterion function

Synonym for loss function. Used in reference to econometrics.

Source: econterms

critical region

synonym for rejection region

Source: econterms

Cronbach's alpha

A test for a model or survey's internal consistency. Called a 'scale reliability coefficient' sometimes. The remainder of this definition is partial and unconfirmed.

Cronbach's alpha assesses the reliability of a rating summarizing a group of test or survey answers which measure some underlying factor (e.g., some attribute of the test-taker). A score is computed from each test item and the overall rating, called a 'scale', is defined by the sum of these scores over all the test items. Then reliability alpha is defined to be the square of the correlation between the measured scale and the underlying factor the scale was supposed to measure. (Which implies that one has another measure in test cases of that underlying factor, or that it is imputed from the test results.) (In Stata's examples it remains unclear what the scale is and how it is measured; apparently alpha can be generated without having a measure of the underlying factor.)

Source: econterms

Cross-Price Elasticity of Demand

The cross-price elasticity of demand measures the sensitivity of the demand for one good to a change in the price of another good. It is calculated as:
(Percentage Change in Demand for Good X)/(Percentage Change in Price of Good Y)

If two goods are independent (that is, the price of one good has no effect on the demand for the other good), the cross-price elasticity of demand is zero.

If two goods are complements (they are typically consumed together), an increase in the price of one good will decrease the demand for the other good (and the reverse); therefore, for complements, the cross-price elasticity of demand is negative.

On the other hand, if two goods are substitutes, an increase in the price of one good will increase the demand for the other good (and the reverse); therefore for substitutes, the cross-price elasticity of demand is positive.
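A minimal sketch of the calculation (the quantities and prices are made up): a 10% rise in the price of Y paired with a 5% fall in demand for X gives a negative elasticity, as expected for complements.

```python
# Percentage change in demand for X divided by percentage change in price of Y.
def cross_price_elasticity(qx0, qx1, py0, py1):
    pct_dq = (qx1 - qx0) / qx0   # percentage change in demand for good X
    pct_dp = (py1 - py0) / py0   # percentage change in price of good Y
    return pct_dq / pct_dp

# Price of Y rises 10% (2.00 -> 2.20) and demand for X falls 5% (100 -> 95):
print(cross_price_elasticity(100.0, 95.0, 2.00, 2.20))   # negative: complements
```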

Source: EconPort

cross-section data

Parallel data on many units, such as individuals, households, firms, or governments. Contrast panel data or time series data.

Source: econterms


cross-validation

A way of choosing the window width for a kernel estimation. The method is to select, from a set of possible window widths, one that minimizes the sum of errors made in predicting each data point by using kernel regression on the others.

Formally, let J be the number of data points, j an index to each one, from one to J, X_j the independent variables for data point j, Y_j the dependent variable for that j, and {h_i} for i=1 to n the set of candidate window widths. The h_i's might be a set of equally spaced values on a grid. The algorithm for choosing one of the h_i's is:

For each candidate window width hi
..For each j from 1 to J
....Drop the data point (Xj, Yj) from the sample temporarily
....Run a kernel regression to estimate Yj using the remaining X's and Y's
....Keep track of the square of the error made in that prediction
..Sum the squares of the errors for every j to get a score for candidate window width hi
..Record that in a list as the score for hi
Select as the outcome h of this algorithm the hi with the lowest score

The grid approach is necessary because the score is not a convex function of the window width; otherwise one could try a simpler optimization, e.g., solving the first-order conditions.
Note, however, that a complete execution of the cross-validation method can be very slow, because it requires as many kernel regressions as there are data points. For example, in this author's experience, the cross-validation computation for one window width in Gauss on a Pentium-90 took about five seconds for 500 data points, about seventeen seconds for 1000 data points, but an hour for 15000 data points. (Then it takes another hour to check another window width; so even the very simplest choice, between two window widths, takes two hours.)
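The algorithm above can be sketched in a few lines. This is a minimal illustration using Nadaraya-Watson kernel regression with a Gaussian kernel on simulated data; the function names, the candidate grid, and the test data are all invented for the example:

```python
import numpy as np

def nw_predict(x0, X, Y, h):
    # Nadaraya-Watson estimate of E[Y | X = x0] with a Gaussian kernel of width h
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    return np.sum(w * Y) / np.sum(w)

def cv_score(X, Y, h):
    # sum of squared leave-one-out prediction errors for window width h
    score = 0.0
    for j in range(len(X)):
        mask = np.arange(len(X)) != j          # drop point j temporarily
        pred = nw_predict(X[j], X[mask], Y[mask], h)
        score += (Y[j] - pred) ** 2
    return score

def choose_bandwidth(X, Y, grid):
    # pick the candidate window width with the lowest score
    return min(grid, key=lambda h: cv_score(X, Y, h))

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, 200)
Y = np.sin(X) + rng.normal(0.0, 0.3, 200)
grid = [0.1, 0.2, 0.5, 1.0, 2.0]
print(choose_bandwidth(X, Y, grid))   # one of the candidate widths
```

The O(J^2) cost per candidate width is visible in `cv_score`, which matches the timing complaint above: doubling the number of data points roughly quadruples the work.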

Source: econterms

CRRA

Stands for Constant Relative Risk Aversion, a property of some utility functions, also said to have isoelastic form. CRRA is a synonym for CES.

Example 1: for any nonzero real a<1, u(c)=c^a/a is a CRRA utility function. It is a vNM utility function.
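As a quick numerical check (a sketch, with a = 0.5 chosen arbitrarily), the Arrow-Pratt coefficient of relative risk aversion, -c u''(c)/u'(c), works out to the constant 1 - a for this utility function, whatever the level of consumption:

```python
# Numerical check that u(c) = c**a / a has constant relative risk aversion,
# RRA(c) = -c * u''(c) / u'(c) = 1 - a.  (a = 0.5 is an arbitrary choice.)

def u(c, a):
    return c ** a / a

def rra(c, a, h=1e-4):
    # central-difference approximations of u'(c) and u''(c)
    u1 = (u(c + h, a) - u(c - h, a)) / (2 * h)
    u2 = (u(c + h, a) - 2 * u(c, a) + u(c - h, a)) / h ** 2
    return -c * u2 / u1

for c in (0.5, 1.0, 2.0, 5.0):
    print(round(rra(c, a=0.5), 3))   # each is 0.5 = 1 - a, regardless of c
```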

Source: econterms

CRS

Stands for Constant Returns to Scale.

Source: econterms

CRSP

Center for Research in Security Prices, a standard database of finance information at the University of Chicago. Has daily returns on NYSE, AMEX, and NASDAQ stocks.

Started in the early 1970s by Eugene Fama, among others. The data there were so much more convenient than the alternatives that they drove the study of security prices for decades afterward. CRSP did not have volume data, which meant that volume/volatility tests were rarely done.

Source: econterms

cubic spline

A particular nonparametric estimator of a function. Given a data set {Xi, Yi}, it estimates values of Y for X's other than those in the sample. The process is to construct a function that balances the twin needs of (1) proximity to the actual sample points and (2) smoothness, so a 'roughness penalty' is defined. See Hardle's equation 3.4.1, near p. 56, for the exact equation. The cubic spline seems to be the most common kind of spline smoother.
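In its standard smoothing-spline form (a generic statement of the objective, not a verbatim copy of Hardle's equation 3.4.1), the estimator m is chosen to minimize a squared-error term plus a curvature penalty, with a smoothing parameter lambda trading the two off:

```latex
\hat{m} \;=\; \arg\min_{m} \; \sum_{i=1}^{n} \bigl( Y_i - m(X_i) \bigr)^2 \;+\; \lambda \int \bigl( m''(x) \bigr)^2 \, dx
```

A large lambda forces a smoother (nearly linear) fit; lambda near zero lets the curve interpolate the sample points.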

Source: econterms

current account balance

The difference between a country's savings and its investment. "[If] positive, it measures the portion of a country's saving invested abroad; if negative, the portion of domestic investment financed by foreigners' savings."

Defined as the value of exports of goods and services plus net returns on investments abroad, minus the value of imports of goods and services, where all these elements are measured in the domestic currency.
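Using the sign convention in the first paragraph (a positive balance means some domestic saving is invested abroad), the definition reduces to a one-line calculation. The figures below are hypothetical:

```python
# Hypothetical figures for illustration, all in the domestic currency.
exports = 500.0               # value of exports of goods and services
imports = 450.0               # value of imports of goods and services
net_investment_income = 20.0  # net returns on investments abroad

current_account = exports + net_investment_income - imports
print(current_account)  # 70.0: positive, so this country is a net lender abroad
```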

Source: econterms

Copyright © 2006 Experimental Economics Center. All rights reserved.