

L^{1}
The set of Lebesgue-integrable real-valued functions on [0,1].

L^{2}
A Hilbert space with inner product (x,y) = integral of x(t)y(t) dt. Equivalently, L^{2} is the space of real-valued random variables that have variances. This is an infinite-dimensional space.
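A quick numerical illustration (my own toy example, not from the source): approximating the L^{2} inner product on [0,1] by a midpoint rule, for x(t)=t and y(t)=t, where the exact value is the integral of t^{2}, namely 1/3.

```python
# Approximate the L^2 inner product (x, y) = integral of x(t) y(t) dt on [0, 1]
# with a midpoint rule. The example functions are illustrative choices.

def l2_inner_product(x, y, n=100_000):
    """Midpoint-rule approximation of the inner product of x and y on [0, 1]."""
    h = 1.0 / n
    return h * sum(x((i + 0.5) * h) * y((i + 0.5) * h) for i in range(n))

# (t, t) = integral of t^2 dt over [0, 1] = 1/3
print(l2_inner_product(lambda t: t, lambda t: t))
```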

L^{n}
The set of continuous bounded functions with domain R^{n}.

labor
"[L]abor economics is primarily concerned with the behavior of employers and employees in response to the general incentives of wages, prices, profits, and nonpecuniary aspects of the employment relationship, such as working conditions." labor

labor market outcomes
Shorthand for worker (never employer) variables that are often considered endogenous in a labor market regression. Such variables, which often appear on the right side of such regressions, include wage rates, employment dummies, and employment rates.

labor productivity
Quantity of output per unit of time worked or per person employed. Could be measured in, for example, U.S. dollars per hour.

labor theory of value
"Both Ricardo and Marx say that the value of every commodity is (in perfect equilibrium and perfect competition) proportionaly to the quantity of labor contained in the commodity, provided this labor is in accordance with the existing standard of efficiency of production (the 'socially necessary quantity of labor'). Both measure this quantity in hours of work and use the same method in order to reduce different qualities of work to a single standard." And neither accounts well for monopoly or imperfect competition. (Schumpeter, p 23)

labor-augmenting
One of the ways in which an effectiveness variable could be included in a production function in a Solow model. If effectiveness A is multiplied by labor L but not by capital K, then we say the effectiveness variable is labor-augmenting.

LAD
Stands for 'least absolute deviations' estimation.
LAD estimation can be used to estimate a smooth conditional median function; that is, an estimator for the median of the process given the data. Say the data are stationary {x_{t}, y_{t}}. The dependent variable is y and the independent variable is x. The criterion function to be minimized in LAD estimation for each observation t is: q(x_{t},y_{t},q) = |y_{t} - m(x_{t},q)|
where m() is a guess at the conditional median function.
Under conditions specified in Wooldridge, pp. 265-7, the LAD estimator here is Fisher-consistent for parameters of the estimator of the median function.
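A minimal numerical sketch of the idea, under assumptions of my own (a conditional median function that is linear through the origin, m(x, q) = qx, simulated data, and a crude grid search in place of a real optimizer):

```python
import random

# LAD: choose the parameter minimizing the sum of absolute deviations
# |y_t - m(x_t, theta)|, with the guessed median function m(x, theta) = theta * x.
# Data are simulated; the true slope is 2.

def lad_criterion(theta, xs, ys):
    return sum(abs(y - theta * x) for x, y in zip(xs, ys))

def lad_estimate(xs, ys, grid):
    return min(grid, key=lambda theta: lad_criterion(theta, xs, ys))

random.seed(0)
xs = [random.uniform(0.0, 1.0) for _ in range(500)]
ys = [2.0 * x + random.gauss(0.0, 0.1) for x in xs]
theta_hat = lad_estimate(xs, ys, [i / 100 for i in range(401)])
print(theta_hat)   # close to the true slope of 2
```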

lag operator
Denoted L. Operates on an expression by moving the subscripts on a time series back one period, so: Le_{t} = e_{t-1}. Why? Well, it can improve the manipulability of some expressions. For example, it turns out one could write an MA(2) process (which see) to look like this, in lag polynomials (which see): e_{t} = (1 + p_{1}L + p_{2}L^{2})u_{t} and then divide both sides by the lag polynomial, and get a legal, meaningful, correct expression.
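A toy sketch of the operator on a finite series (my own illustration; presample values are shown as None):

```python
# Lag operator on a list indexed by t: (L e)_t = e_{t-1}.

def lag(series, k=1):
    """Apply the lag operator k times; presample values become None."""
    return [None] * k + list(series[:-k]) if k > 0 else list(series)

e = [10, 20, 30, 40]
print(lag(e))     # [None, 10, 20, 30]
print(lag(e, 2))  # [None, None, 10, 20]
```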

lag polynomial
A polynomial expression in lag operators (which see). Example: (1 - p_{1}L + p_{2}L^{2}) where L^{2} = LL, or the lag operator L applied twice. These are useful for manipulating time series. For example, one can quickly show an AR(1) is equivalent to an MA(infinity) by dividing both sides by the lag polynomial (1 - pL).

Lagrangian multiplier
An algebraic term that arises in the context of problems of mathematical optimization subject to constraints, which in economics contexts is sometimes called a shadow price.
A long example: Suppose x represents a quantity of something that an individual might consume, u(x) is the utility (satisfaction) gained by that individual from the consumption of quantity x. We could model the individual's choice of x by supposing that the consumer chooses x to maximize u(x):
x = arg max_{x} u(x)
Suppose however that the good is not free, so the choice of x must be constrained by the consumer's income. That leads to a constrained optimization problem. [Ed.: this entry is unfinished]
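A hedged sketch of how such an example might continue, with a utility function, prices, and income of my own choosing: maximize u(x_{1},x_{2}) = ln(x_{1}) + ln(x_{2}) subject to p_{1}x_{1} + p_{2}x_{2} = m. The first-order conditions of the Lagrangian u(x) + lam(m - p_{1}x_{1} - p_{2}x_{2}) give closed forms, and lam comes out as the shadow price of income:

```python
# Solve max ln(x1) + ln(x2) s.t. p1*x1 + p2*x2 = m via the Lagrangian's
# first-order conditions: 1/x_i = lam * p_i plus the budget constraint.
# These give x_i = m / (2 * p_i) and lam = 2 / m (the shadow price of income).

def solve_consumer(p1, p2, m):
    x1 = m / (2.0 * p1)
    x2 = m / (2.0 * p2)
    lam = 1.0 / (p1 * x1)        # equals 2 / m
    return x1, x2, lam

x1, x2, lam = solve_consumer(2.0, 4.0, 100.0)
print(x1, x2, lam)   # 25.0 12.5 0.02
```

Here a one-unit rise in income m raises maximized utility by about lam = 2/m, which is why the multiplier is read as a shadow price.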

LAN
Stands for 'locally asymptotically normal', a property of a family of distributions.

large sample
Usually a synonym for 'asymptotic' rather than a reference to an actual sample magnitude.

Laspeyres index
A price index following a particular algorithm.
It is calculated from a set ('basket') of fixed quantities of a finite list of goods. We are assumed to know the prices in two different periods. Let the price index be one in the first period, which is then the base period. Then the value of the index in the second period is equal to this ratio: the total price of the basket of goods in period two divided by the total price of exactly the same basket in period one.
As for any price index, if all prices rise the index rises, and if all prices fall the index falls.
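A small sketch of the algorithm with a made-up two-good basket (my own numbers, not from the source):

```python
# Laspeyres index: cost of the fixed base-period basket at new prices,
# divided by its cost at base-period prices. Index = 1 in the base period.

def laspeyres(quantities, base_prices, new_prices):
    base_cost = sum(q * p for q, p in zip(quantities, base_prices))
    new_cost = sum(q * p for q, p in zip(quantities, new_prices))
    return new_cost / base_cost

q = [10, 5]            # fixed basket quantities
p_base = [1.00, 2.00]  # period-one prices
p_new = [1.10, 2.50]   # period-two prices
print(laspeyres(q, p_base, p_new))   # 23.5 / 20 = 1.175
```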

Law of iterated expectations
Often exemplified by E_{t}E_{t+1}(.) = E_{t}(.) That is, "one cannot use limited information [at time t] to predict the forecast error one would make if one had superior information [at t+1]."  Campbell, Lo, and MacKinlay, p 23.
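A simulation sketch of the unconditional version, E[E(Y|X)] = E(Y), using a toy model of my own (X a fair coin, Y = X plus noise):

```python
import random

# Law of iterated expectations: averaging the conditional means of Y given X,
# weighted by the frequency of each X, recovers the unconditional mean of Y.

random.seed(1)
n = 100_000
xs = [random.randint(0, 1) for _ in range(n)]
ys = [x + random.gauss(0.0, 1.0) for x in xs]

e_y = sum(ys) / n                      # unconditional mean of Y
groups = {}
for x, y in zip(xs, ys):
    groups.setdefault(x, []).append(y)
e_cond = sum((len(g) / n) * (sum(g) / len(g)) for g in groups.values())

print(e_y, e_cond)   # the two numbers agree
```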

LBO
Leveraged buyout. The act of taking a public company private by buying it with funds raised by issuing bonds, then using the company's revenues to pay off the bonds.

Learning process
Consider a repeated play of a finite game. In each period, every player observes the history of past actions and forms a belief about the other players' strategies. He then chooses a best response according to his belief about the other players' strategies. We call such a process a learning process.

least squares learning
The kind of learning that an agent in a model exhibits by adapting to past data by running least squares on it to estimate a hypothesized parameter and behaving as if that parameter were correct.

leisure
In some models, individuals spend some time working and the rest is lumped into a category called leisure, the details of which are usually left out.

lemons model
Describes models like that of Akerlof's 1970 paper, in which the fact that a good is available suggests that it is of low quality. For example, why are used cars for sale? In many cases because they are "lemons," that is, they were problematic to their previous owners.

Leontief production function
Has the form q=min{x1,x2} where q is a quantity of output and x1 and x2 are quantities of inputs or functions of the quantities of inputs.

leptokurtic
An adjective describing a distribution with high kurtosis. 'High' means the fourth central moment is more than three times the square of the second central moment; such a distribution has greater kurtosis than a normal distribution. This term is used in Bollerslev-Hodrick 1992 to characterize stock price returns. Lepto means 'slim' in Greek and refers to the central part of the distribution.

Lerman ratio
A government benefit to the underemployed will presumably reduce their hours of work. The ratio of the actual increase in income to the benefit is the Lerman ratio, which is ordinarily between zero and one. Moffitt (1992) estimates it in regard to the U.S. AFDC program at about .625.

Lerner index
A measure of the profitability of a firm that sells a good: (price  marginal cost) / price.
One estimate, from Domowitz, Hubbard, and Petersen (1988) is that the average Lerner index for manufacturing firms in their data was .37.

leverage ratio
Meaning differs by context. Often: the ratio of debts to total assets. Can also be the ratio of debts (or long-term debts in particular, excluding for example accounts payable) to equity.
Normally used to describe a firm's accounts, but could describe the accounts of some other organization, an individual, or a collection of organizations.

Leviathan
The allpowerful kind of state that Hobbes thought "was necessary to solve the problem of social order."  Cass R. Sunstein, "The Road from Serfdom" The New Republic Oct 20, 1997, p 37.

Liability of newness
The liability of newness phenomenon describes the varying risk of death of an organization over its life course. It states that the risk of dying is highest at the point of founding of an organization and decreases as the organization ages. There are basically three reasons why this might be the case (see Stinchcombe, 1965):
• New organizations which are acting in new areas require new roles to be performed by their members. Learning the new roles takes time and leads to economic inefficiencies.
• Trust among the organizational members has yet to be developed, since in most cases the new employees of a firm do not know each other when the organization is founded.
• New organizations have not yet built stable portfolios of clients.
These considerations can, at least in some respects, also apply to the new rules of an organization. A new rule also implies new roles that have to be learned, and members have to develop trust towards the new rule. According to this theoretical concept, a new organizational rule should also have its highest risk of being abolished just after its founding (see Schulz, 1993).

Life-cycle hypothesis
The life-cycle hypothesis presents a well-defined linkage between the consumption plans of an individual and her income and income expectations as she passes from childhood, through the work-participating years, into retirement and eventual decease. Early attempts to establish such a linkage were made by Irving Fisher (1930) and again by Harrod (1948) with his notion of hump saving, but a sharply defined hypothesis which carried the argument forward both theoretically and empirically with its range of well-specified tests for cross-section and time series evidence was first advanced by Modigliani & Brumberg (1954). Both their paper and advanced copies of the permanent income theory of Milton Friedman (1957) were circulating in 1953. Both the Modigliani-Brumberg and the Friedman theories are referred to as life-cycle theories.
The main building block of life-cycle models is the saving decision, i.e., the division of income between consumption and saving. The saving decision is driven by preferences between present and future consumption (or the utility derived from consumption). Given the income stream the household receives over time, the sequence of optimal consumption and saving decisions over the entire life can be computed. Note that the standard life-cycle model as presented here is firmly grounded in expected utility theory and assumes rational behavior.
The typical shape of the income profile over the life cycle starts with low income during the early years of the working life; then income increases until a peak is reached before retirement, while pension income during retirement is substantially lower. To make up for the lower income during retirement and to avoid a sharp drop in utility at the point of retirement, individuals will save some fraction of their income during their working life and dissave during retirement. This results in a hump-shaped savings profile over the life cycle, the main prediction of the life-cycle theory.
Unfortunately, this prediction does not hold in actual household behavior. It is fair to say the reasons for this failure of the simple life-cycle model are still not understood. Rodepeter & Winter (1998) provide empirical evidence for Germany and discuss some extensions of the life-cycle model that might help to understand actual savings behavior. An important direction of current research tries to apply elements of behavioral economics to life-cycle savings decisions.

Life-cycle hypothesis: a review of the literature
This review of the literature on life-cycle consumption and saving decisions is adapted from Fisher (1987).
The life-cycle hypothesis presents a well-defined linkage between the consumption plans of an individual and her income and income expectations as she passes from childhood, through the work-participating years, into retirement and eventual decease. Early attempts to establish such a linkage were made by Irving Fisher (1930) and again by Harrod (1948) with his notion of hump saving, but a sharply defined hypothesis which carried the argument forward both theoretically and empirically with its range of well-specified tests for cross-section and time series evidence was first advanced by Modigliani & Brumberg (1954). Both their paper and advanced copies of the permanent income theory of Milton Friedman (1957) were circulating in 1953 and led M.R. Fisher (1956) to carry out tests of the theories even preceding publication of Friedman's work. Both the Modigliani-Brumberg and the Friedman theories are referred to as life-cycle theories and they certainly have many similar implications, but the one that is more closely related to the life cycle with emphasis on age, that of Modigliani and Brumberg, is the one on which the following review concentrates.
The key which rendered the multiperiod analysis tractable under subjective certainty was the specification that the lifetime utility function be homothetic; this permitted planned consumption for each future period to be written as a function of expected wealth as seen at the planning date, the functional parameters being in no way dependent upon wealth, but upon age and tastes. The authors further sharpened their hypothesis. They specified that an individual would plan to consume the same amount in real discounted terms each year. Throughout, desired bequest and initial assets were set to zero. However, the authors did show that bequests could be accounted for within the homothetic utility function itself if that became necessary.
From the outset, such a sharp hypothesis was desired for empirical testing. For Modigliani at least, a propelling influence had been the debate about the explanatory power of the Keynesian consumption function for forecasting postwar consumption and income. The inadequacies revealed had led already to several refined theories, notably by Duesenberry (1949) and by Modigliani (1949) himself. In the 1940s, cross-section studies had been carefully carried through at the National Bureau of Economic Research (NBER), and empirical results from these studies were promoting theoretical insights. Any new theory had to be consistent with these findings.
The tighter specification of the hypothesis enabled the spelling out of the pattern of accumulating savings in the working years to finance the retirement years: hump savings. Assuming that real income of each member of the population-wide sample remained the same throughout working life, it was shown that the saving-income ratio was independent of the age and income distribution and dependent only on the proportion of retirement years to expected lifetime. This alerted economists to the fact that cross-section results do not directly translate into estimates of the marginal propensity to save of an individual planning function. This insight is of broader significance not confined to the simple hypothesis.
The implications of the hypothesis for time series analysis were disseminated much more slowly as the companion paper to that on cross-section interpretation was never published, accounts not being freely available until 1963 and the original text itself not until 1980.
Real consumption, including the depreciation of durable goods, is a proportion of expected real wealth, and wealth is the addition of initial assets at the planning date, current income, and expected (discounted) future income. By then assuming that the proportionality factor referred to is identical across individuals, they devised an aggregate relation for each and every age group. Next they proceed to aggregate across age groups. Here the proportionality factor, depending as it does on age, is not independent of assets, and bias may be introduced. If the strictest set of assumptions used in the cross-section analysis is employed, the authors show that when aggregated real income follows an exponential growth trend the parameters of the aggregate relation remain constant over time. They are, however, sensitive to the magnitude of the growth rate of real income (a sum of growth rates in productivity and population), the saving-income ratio being larger the greater the rate of income growth.
If income and/or assets at any time move out of line with previous planning expectations, plans can be revised. Suppose income rises, yet income expectations are not revised, the change being viewed as a one-off event. Then the individual marginal propensity to save at that date would rise to finance subsequent consumption at a higher level until death. If income expectations were revised upwards permanently, then the marginal propensity to save would also rise, but to a lesser degree than in the one-off case, as higher consumption can more easily be provided for out of later-period incomes. Allowance for income variability is straightforward in cross section; with time series, expected income, here labor income, may be set equal to a weighted average of aggregated past and expected future income, or subdivided according to whether the reference is to employed or unemployed consumers at any time (Modigliani & Ando, 1963).

LIFT
Acronym for "Let It Function Today", a concept very comparable to rationality (for a repeated discussion see Bogart, 1985):
• Everyone believes it exists, although some pessimist critics say it exists only theoretically and has no everyday value whatsoever (e.g. Lotterbottel, 1983);
• It has just one entry (the economist view), but still people can get into it coming from very different places or levels. Superficially, these levels all look the same (red), but they really are dependent on context factors (the psychologist view);
• It is at the core of the SonderForschungsBereich. However, there will never be more than four people able to use it at the same time. The probability is high that this also is the time when the bell rings and the concept breaks down (Hausmeister, 1952);
• People are really into it and they talk a lot about it (Funk & Stoer, 1997). Behavioral observation has proved, however, that in fact nobody gets in (although some people report spiritual experiences of "being in a flight-like state" or "getting closer to the heavenly Geschäftsstelle" or being "lifted up", while others believe in "the key"). Instead people circle around it using the dissatisficing strategy of climbing the stairs of experimental simulation;
• It is supposed to work perfectly, but it could happen that at some point it wouldn't. Therefore it is not worked with preventively. As one result the concept just never works; as a second result people sweat a whole lot;
• There is some speculation about what would happen if it worked at some point, but empirical evidence for these theories is still weak (Autorenkollektiv, 1997).

likelihood function
In maximum likelihood estimation, the likelihood function (often denoted L()) is the joint probability function of the sample, given the probability distributions that are assumed for the errors. That function is constructed by multiplying the pdf of each of the data points together: L(q) = L(q; X) = f(X; q) = f(X_{0};q)f(X_{1};q)...f(X_{N};q)
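A sketch with made-up data: for an i.i.d. N(q, 1) sample the log of the likelihood is a sum of log densities, and it is maximized near the sample mean (here a crude grid search stands in for a proper optimizer):

```python
import math

# Log-likelihood of an i.i.d. N(theta, 1) sample: sum of log normal densities.

def log_likelihood(theta, data):
    return sum(-0.5 * math.log(2.0 * math.pi) - 0.5 * (x - theta) ** 2
               for x in data)

data = [1.2, 0.8, 1.5, 0.9]
grid = [i / 1000 for i in range(3001)]
theta_hat = max(grid, key=lambda t: log_likelihood(t, data))
print(theta_hat)   # the sample mean, 1.1
```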

Limdep
A program for the econometric study of limited dependent variables. Limdep's web site is at 'http://www.limdep.com'.

limited dependent variable
A dependent variable in a model is limited if it is discrete (can take on only a countable number of values) or if it is not always observed because it is truncated or censored.

LIML
Stands for 'Limited Information Maximum Likelihood', an estimation approach.

Lindeberg-Levy Central Limit Theorem
For {w_{t}} an iid sequence with Ew_{t}=mu and var(w_{t})=s^{2}: Let W = the average of the T w_{t}'s. Then T^{1/2}(W - mu)/s converges in distribution, as T goes to infinity, to a N(0,1) distribution.
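A simulation sketch (my own choice of distribution): with w_{t} i.i.d. uniform on [0,1], so mu = 0.5 and s^{2} = 1/12, the standardized sample mean should look approximately N(0,1) for large T:

```python
import random
import statistics

# Simulate the statistic T^{1/2} (W - mu) / s many times and check that its
# mean and standard deviation are near 0 and 1.

random.seed(2)
T, reps = 400, 2000
mu, s = 0.5, (1.0 / 12.0) ** 0.5
zs = []
for _ in range(reps):
    w_bar = sum(random.random() for _ in range(T)) / T
    zs.append(T ** 0.5 * (w_bar - mu) / s)

print(statistics.mean(zs), statistics.stdev(zs))   # roughly 0 and 1
```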

linear model
An econometric model is linear if it is expressed in an equation in which the parameters enter linearly, whether or not the data require nonlinear transformations to get to that equation.

linear pricing schedule
Say the number of units, or quantity, paid for is denoted q, and the total paid is denoted T(q), following the notation of Tirole. A linear pricing schedule is one that can be characterized by T(q) = pq for some price per unit p.
For alternative pricing schedules see nonlinear pricing or affine pricing schedule.

linear probability models
Econometric models in which the dependent variable is a probability between zero and one. These are easier to estimate than probit or logit models but usually have the problem that some predictions will not be in the range of zero to one.
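A toy sketch of the out-of-range problem (made-up data): fitting a line by least squares to a zero-one dependent variable and then predicting at an extreme value of x gives a "probability" outside [0,1]:

```python
# Simple one-regressor OLS fit to binary data, then an out-of-range prediction.

def ols(xs, ys):
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    b = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
    a = y_bar - b * x_bar
    return a, b

xs = [1, 2, 3, 4, 5, 6]
ys = [0, 0, 0, 1, 1, 1]
a, b = ols(xs, ys)
print(a + b * 10)   # predicted "probability" above one
print(a + b * 0)    # predicted "probability" below zero
```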

Linear separability
The method typically used to combine the attribute weights was
adapted from
Tversky's (1977)
contrast model of similarity. The
attribute weights are assumed to be independent and
combined by
adding (that means they are linearly
separable).

link function
Defined in the context of the generalized linear model, which see.

Lipschitz condition
A function g: R -> R satisfies a Lipschitz condition if |g(t_{1}) - g(t_{2})| <= C|t_{1} - t_{2}| for some constant C. For a fixed C we could say this is "the Lipschitz condition with constant C."
A function that satisfies the Lipschitz condition for a finite C is said to be Lipschitz continuous, which is a stronger condition than regular continuity; it means that the slope is never so steep as to be outside the range (-C, C).

Lipschitz continuous
A function is Lipschitz continuous if it satisfies the Lipschitz condition for a finite constant C. Lipschitz continuity is a stronger condition than regular continuity. It means that the slope is never outside the range (-C, C).
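A numerical spot check (my own example): sin is Lipschitz continuous with constant C = 1, since its slope never leaves (-1, 1), so the difference-quotient ratio should never exceed 1:

```python
import math
import random

# Check |sin(t1) - sin(t2)| <= 1 * |t1 - t2| on many random pairs.

def lipschitz_ratio(g, t1, t2):
    return abs(g(t1) - g(t2)) / abs(t1 - t2)

random.seed(3)
pairs = [(random.uniform(-10.0, 10.0), random.uniform(-10.0, 10.0))
         for _ in range(1000)]
worst = max(lipschitz_ratio(math.sin, t1, t2)
            for t1, t2 in pairs if t1 != t2)
print(worst)   # never above 1 (up to rounding)
```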

liquid
A liquid market is one in which it is not difficult or costly to buy or sell.
More formally, Kyle (1985), following Black (1971), describes a liquid market as "one which is almost infinitely tight, which is not infinitely deep, and which is resilient enough so that prices eventually tend to their underlying value."

liquidity
A property of a good: a good is liquid to the degree it is easily convertible, through trade, into other commodities. Liquidity is not a property of the commodity itself but something established in trading arrangements.

liquidity constraint
Many households, e.g. young ones, cannot borrow to consume or invest as much as they would want, but are constrained to current income by imperfect capital markets.

liquidity trap
A Keynesian idea. When expected returns from investments in securities or real plant and equipment are low, investment falls, a recession begins, and cash holdings in banks rise. People and businesses then continue to hold cash because they expect spending and investment to be low. This is a selffulfilling trap.
See also Keynes effect and Pigou effect.

Ljung-Box test
Same as portmanteau test.

locally identified
Linear models are either globally identified or there are an infinite number of observationally equivalent ones. But for models that are nonlinear in parameters, "we can only talk about local properties." Thus the idea of locally identified models, which can be distinguished in data from any other 'close by' model. "A sufficient condition for local identification is that" a certain Jacobian matrix is of full column rank.

locally nonsatiated
An agent's preferences are locally nonsatiated if arbitrarily close to any bundle there is another bundle that is strictly preferred. Preferences that are continuous and strictly increasing in all goods are locally nonsatiated.

log
In the context of economics, log always means 'natural log', that is log_{e}, where e is the natural constant that is approximately 2.718281828. So x=log y <=> e^{x}=y.

log utility
A utility function. Some versions of this are used often in finance. Here is the simplest version. Define U() as the utility function and w as wealth. Then U(w) = ln(w)
is the log utility function.

log-concave
A function f(w) is said to be log-concave if its natural log, ln(f(w)), is a concave function; that is, assuming f is differentiable, f''(w)f(w) - f'(w)^{2} <= 0. Since log is a strictly concave function, any positive concave function is also log-concave. A random variable is said to be log-concave if its density function is log-concave. The uniform, normal, beta, exponential, and extreme value distributions have this property. If pdf f() is log-concave, then so are its cdf F() and 1-F(). The truncated version of a log-concave function is also log-concave. In practice the intuitive meaning of the assumption that a distribution is log-concave is that (a) it doesn't have multiple separate maxima (although it could be flat on top), and (b) the tails of the density function are not "too thick". An equivalent definition, for vector-valued random variables, is in Heckman and Honore, 1990, p 1127. Random vector X is log-concave iff its density f() satisfies the condition that f(ax_{1}+(1-a)x_{2}) ≥ [f(x_{1})]^{a}[f(x_{2})]^{(1-a)} for all x_{1} and x_{2} in the support of X and all a satisfying 0≤a≤1.
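A numerical sketch of the defining inequality for the standard normal density (my own check; a tiny tolerance absorbs floating-point rounding):

```python
import math
import random

# Check f(a*x1 + (1-a)*x2) >= f(x1)^a * f(x2)^(1-a) for the N(0,1) density.

def normal_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

random.seed(4)
ok = all(
    normal_pdf(a * x1 + (1.0 - a) * x2) + 1e-12
    >= normal_pdf(x1) ** a * normal_pdf(x2) ** (1.0 - a)
    for x1, x2, a in ((random.uniform(-3, 3), random.uniform(-3, 3),
                       random.random()) for _ in range(1000))
)
print(ok)   # True: the normal density is log-concave
```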

log-convex
A random variable is said to be log-convex if its density function is log-convex. Pareto distributions with finite means and variances have this property, and so do gamma densities with a coefficient of variation greater than one. [Ed.: I do not know the intuitive content of the definition.] A log-convex random vector is one whose density f() satisfies the condition that f(ax_{1}+(1-a)x_{2}) ≤ [f(x_{1})]^{a}[f(x_{2})]^{(1-a)} for all x_{1} and x_{2} in the support of X and all a satisfying 0≤a≤1.

Logic of conversation
Inferring the pragmatic meaning of a semantic utterance requires going beyond the information given. "In making these inferences, speakers and listeners rely on a set of tacit assumptions that govern the conduct of conversation in everyday life" (Schwarz, 1994, p. 124). According to Grice (1975) these assumptions can be expressed by four maxims which constitute the "cooperative principle". "First, a maxim of quantity demands that contributions are as informative as required, but not more informative than required. Second, a maxim of quality requires participants to provide no information they believe is false or lack adequate evidence for. Third, according to a maxim of relation, contributors need to be relevant for the aims of the ongoing interaction. Finally, a maxim of manner states that contributors should be clear, rather than obscure or ambiguous" (Bless, Strack & Schwarz, 1993, p. 151). These maxims have been demonstrated to have a pronounced impact on how individuals perceive and react to semantically presented social situations and problem scenarios.

logistic distribution
Has the cdf F(x) = 1/(1+e^{-x}). This distribution is quicker to calculate than the normal distribution but is very similar. Another advantage over the normal distribution is that it has a closed-form cdf. The pdf is f(x) = e^{-x}(1+e^{-x})^{-2} = F(x)F(-x).
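A sketch of the cdf and the pdf identity f(x) = F(x)F(-x) stated above:

```python
import math

# Logistic cdf and pdf; check F(0) = 1/2 and f(x) = F(x) * F(-x).

def F(x):
    return 1.0 / (1.0 + math.exp(-x))

def f(x):
    return math.exp(-x) / (1.0 + math.exp(-x)) ** 2

print(F(0.0))                           # 0.5
print(abs(f(1.3) - F(1.3) * F(-1.3)))   # essentially zero
```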

logit model
A univariate binary model. That is, for dependent variable y_{i} that can be only one or zero, and a continuous independent variable x_{i}: Pr(y_{i}=1) = F(x_{i}'b). Here b is a parameter to be estimated, and F is the logistic cdf. The probit model is the same but with a different cdf for F.
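A sketch with simulated data of my own (single regressor, no constant, and a crude grid search in place of a real maximizer): the slope b is estimated by maximizing the log likelihood implied by Pr(y_{i}=1) = F(x_{i}b) with F the logistic cdf:

```python
import math
import random

# Simulate a logit model with true slope 1.5, then recover it by maximizing
# the binary log likelihood over a grid of candidate slopes.

def logistic_cdf(z):
    return 1.0 / (1.0 + math.exp(-z))

def logit_loglik(b, xs, ys):
    return sum(math.log(logistic_cdf(b * x)) if y == 1
               else math.log(1.0 - logistic_cdf(b * x))
               for x, y in zip(xs, ys))

random.seed(5)
xs = [random.uniform(-2.0, 2.0) for _ in range(2000)]
ys = [1 if random.random() < logistic_cdf(1.5 * x) else 0 for x in xs]
b_hat = max([i / 20 for i in range(81)], key=lambda b: logit_loglik(b, xs, ys))
print(b_hat)   # near the true slope of 1.5
```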

lognormal distribution
Let X be a random variable with a standard normal distribution. Then the variable Y=e^{X} has a lognormal distribution. Example: Yearly incomes in the United States are roughly lognormally distributed.

longitudinal data
A synonym for panel data.

Lorenz curve
Used to discuss concentration of suppliers (firms) in a market. The horizontal axis is divided into as many pieces as there are suppliers. Often it is given a percentage scale going from 0 to 100. The firms are in order of decreasing size. On the vertical axis are the market sales in percentage terms from 0 to 100. The Lorenz curve is a graph of the cumulative sales of all the firms to the right of each point on the horizontal axis.
So (0,0) and (100,100) are the endpoints on the Lorenz curve and it is weakly convex, and piecewise linear, between. See also Gini coefficient.
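A sketch computing the curve's points for a made-up market, cumulating firms from smallest to largest so that the endpoints are (0,0) and (100,100) and the curve is weakly convex:

```python
# Lorenz curve points: cumulative percent of firms (x) against the cumulative
# percent of market sales (y) accounted for by those firms.

def lorenz_points(sales):
    ordered = sorted(sales)            # smallest firms first
    total = float(sum(ordered))
    points, cum = [(0.0, 0.0)], 0.0
    for i, s in enumerate(ordered, start=1):
        cum += s
        points.append((100.0 * i / len(ordered), 100.0 * cum / total))
    return points

print(lorenz_points([50, 30, 15, 5]))
# [(0.0, 0.0), (25.0, 5.0), (50.0, 20.0), (75.0, 50.0), (100.0, 100.0)]
```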

loss function
Or, 'criterion function.' A function that is minimized to achieve a desired outcome. Often econometricians minimize the sum of squared errors in making an estimate of a function or a slope; in this case the loss function is the sum of squared errors. One might also think of agents in a model as minimizing some loss function in their actions that are predicated on estimates of things such as future prices.

lower hemicontinuous
A property of correspondences; intuitively, no disappearing points: any point in the image at a limit can be approached through points in the images along a sequence converging to that limit.

LRD
Longitudinal Research Database, at the U.S. Bureau of the Census. Used in the study of labor and productivity. The data is not publicly available without special certification from the Census. The LRD extends back to 1982.

Lucas critique
A criticism of econometric evaluations of U.S. government policy as they existed in 1973, made by Robert E. Lucas. "Keynesian models consisted of collections of decision rules for consumption, investment in capital, employment, and portfolio balance. In evaluating alternative policy rules for the government,.... those private decision rules were assumed to be fixed.... Lucas criticized such procedures [because optimal] decision rules of private agents are themselves functions of the laws of motion chosen by the government.... policy evaluation procedures should take into account the dependence of private decision rules on the government's ... policy rule." In Cochrane's language: "Lucas argued that policy evaluation must be performed with models specified at the level of preferences ... and technology [like discount factor beta and permanent consumption c^{*} and exogenous interest rate r], which presumably are policy invariant, rather than decision rules which are not." [I believe the canonical example is: what happens if government changes marginal tax rates? Is the response of tax revenues linear in the change, or is there a Laffer curve to the response? Thus stated, this is an empirical question.]
