Sometimes defined to be the ratio of expenditures by a firm on research and development to the firm's sales.
Usually written R2. Is the square of the correlation coefficient between the dependent variable and the estimate of it produced by the regressors, or equivalently defined as the ratio of regression variance to total variance.
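The equivalence of the two definitions can be checked numerically; the simulated data and regression below are invented purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 + 1.5 * x + rng.normal(size=100)

# Fit y = a + b*x by ordinary least squares (with an intercept).
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

# Definition 1: squared correlation between y and its fitted values.
r2_corr = np.corrcoef(y, y_hat)[0, 1] ** 2
# Definition 2: regression variance over total variance.
r2_var = y_hat.var() / y.var()
print(r2_corr, r2_var)  # the two definitions agree
```

Note that the equality of the two definitions relies on the regression including an intercept.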
Results from a government's choice in certain kinds of models. Suppose that the government knows how private sector producers will respond to any economic environment, and that the government moves first, choosing some aspect of the environment. Suppose further that the government makes its choice in order to maximize a utility function for the population. Then the government's choice is a Ramsey problem, and its solution pays off with the Ramsey outcome.
The payoffs from a Ramsey equilibrium.
See Ramsey equilibrium.
Not completely predetermined by the other variables available.
Examples: Consider the function plus(x,y), which we define to have the value x+y. Every time one applies this function to a given x and y, it gives the same answer. Such a function is deterministic, that is, nonrandom.
Consider by contrast the function N(0,1) which we define to give back a draw from a standard normal distribution. This function does not return the same value every time, even when given the same parameters, 0 and 1. Such a function is random, or stochastic.
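A minimal sketch of the distinction, with plus() deterministic and a hypothetical standard_normal_draw() standing in for the N(0,1) "function":

```python
import random

def plus(x, y):
    # Deterministic: the same inputs always give the same answer.
    return x + y

def standard_normal_draw():
    # Stochastic: each call returns a fresh draw from N(0, 1), so
    # repeated calls with the same parameters (0 and 1) differ.
    return random.gauss(0.0, 1.0)

assert plus(2, 3) == plus(2, 3) == 5
draws = [standard_normal_draw() for _ in range(5)]
print(draws)  # five different values
```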
random effects estimation
The GLS procedure in the context of panel data.
Fixed effects and random effects are forms of linear regression whose understanding presupposes an understanding of OLS.
In a fixed effects regression specification there is a binary variable (also called dummy or indicator variable) marking cross section units and/or time periods. If there is a constant in the regression, one cross section unit must not have its own binary variable marking it.
From Kennedy, 1992, p. 222:
'In the random effects model there is an overall intercept and an error term with two components: eit + ui. The eit is the traditional error term unique to each observation. The ui is an error term representing the extent to which the intercept of the ith cross-sectional unit differs from the overall intercept. . . . . This composite error term is seen to have a particular type of nonsphericalness that can be estimated, allowing the use of EGLS for estimation. Which of the fixed effects and the random effects models is better? This depends on the context of the data and for what the results are to be used. If the data exhaust the population (say observations on all firms producing automobiles), then the fixed effects approach, which produces results conditional on the units in the data set, is reasonable. If the data are a drawing of observations from a large population (say a thousand individuals in a city many times that size), and we wish to draw inferences regarding other members of that population, the fixed effects model is no longer reasonable; in this context, use of the random effects model has the advantage that it saves a lot of degrees of freedom. The random effects model has a major drawback, however: it assumes that the random error associated with each cross-section unit is uncorrelated with the other regressors, something that is not likely to be the case. Suppose, for example, that wages are being regressed on schooling for a large set of individuals, and that a missing variable, ability, is thought to affect the intercept; since schooling and ability are likely to be correlated, modeling this as a random effect will create correlation between the error and the regressor schooling (whereas modeling it as a fixed effect will not). The result is bias in the coefficient estimates from the random effect model.'
[Kennedy asserts, then, that fixed and random effects often produce very different slope coefficients.]
The Hausman test is one way to distinguish which one makes sense.
Synonym for stochastic process.
A nondeterministic function. See random.
A random walk is a random process yt like:
yt = m + yt-1 + et
where m is a constant (the trend, often zero) and et is white noise.
A random walk has infinite variance and a unit root.
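A short simulation (with invented parameters) illustrates the definition and the growing variance:

```python
import random

def random_walk(n_steps, m=0.0, seed=0):
    # y_t = m + y_{t-1} + e_t, with e_t standard normal white noise
    rng = random.Random(seed)
    y = [0.0]
    for _ in range(n_steps):
        y.append(m + y[-1] + rng.gauss(0.0, 1.0))
    return y

# Var(y_t) = t * var(e) grows without bound, reflecting the unit root.
# Estimate it across many simulated paths (mean is zero when m = 0).
paths = [random_walk(100, seed=s) for s in range(500)]
var_t10 = sum(p[10] ** 2 for p in paths) / len(paths)
var_t100 = sum(p[100] ** 2 for p in paths) / len(paths)
print(var_t10, var_t100)  # roughly 10 and 100
```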
defines the Cramer-Rao lower bound, which see. For an unbiased estimator T of a parameter theta, the bound is var(T) >= 1/I(theta), where I(theta) is the Fisher information of the sample.
An adjective with several definitions:
(1) characterizing behavior that purposefully chooses means to achieve ends (as in Landes, 1969/1993, p 21).
(2) characterizing preferences which are complete and transitive, and therefore can be represented by a utility function (e.g. Mas-Colell).
(3) characterizing a thought process based on reason; sane; logical. Can be used in regard to behavior. (e.g. American Heritage Dictionary, p 1028)
In economics, rational behavior means that individuals maximize some objective function (e.g., their utility function) under the constraints they face.
The concept of rational behavior has, in addition to making the analysis of individual behavior a good deal more tractable than a less structured assumption would permit, two interpretations. First, it allows one to derive optimal economic behavior in a normative sense. Second, models of rational behavior can be used to explain and predict actual (i.e., observed) economic behavior.
An assumption in a model: that the agent under study uses a forecasting mechanism that is as good as is possible given the stochastic processes and information available to the agent.
Often in essence the rational expectations assumption is that the agent knows the model, and fails to make absolutely correct forecasts only because of the inherent randomness in the economic environment.
The option of an agent not to acquire or process information about some realm. Ordinarily used to describe a citizen's choice not to pay attention to political issues or information, because paying attention has costs in time and effort, and the effect a citizen would have by voting per se is usually zero.
In a noncooperative game, a strategy of player i is rationalizable iff it is a best response to a possible set of actions of the other players, where those actions are best responses given beliefs that those other players might have.
By rationalizable we mean that i's strategy can be justified in terms of the other players choosing best responses to some beliefs (subjective probability distributions) that they may be conjectured to have.
Nash strategies are rationalizable.
For a more formal definition see sources. This is a rough paraphrase.
verb, meaning: to take an observed or conjectured behavior and find a model environment in which that behavior is an optimal solution to an optimization problem.
A computer program for the statistical analysis of data, especially time series. Name stands for Regression Analysis of Time Series. First chapter of its manual has a nice tutorial.
The software is made by Estima Corp.
stands for Real Business Cycle (which see) -- a class of macro theories
real business cycle theory
A class of theories associated most with Kydland and Prescott (1982). The idea is to study business cycles with the assumption that they are driven entirely by technology shocks rather than by monetary shocks or changes in expectations.
Shocks to government purchases are another kind of shock that can appear in a pure real business cycle (RBC) model (Romer, 1996, p. 151).
An effect of production or transactions on outside parties that affects something entering their production or utility functions directly.
Recent developments in theory and practice
Emphasizing that the young and old coexist at
any time, overlapping generation models (of which Modigliani and Brumberg are now seen to be
a special case) have been fruitful in depicting the equilibrium pattern of growth in an
economy over time, in bringing into sharp relief the role of interest rates, and in weighing
the welfare contribution of security and private market saving schemes. They have also
sharpened up the treatment of bequests, both anticipated and accidental.
They have lent themselves to simulation studies but have not proved rewarding for tests
against empirical data.
Also, models of dynamic labor supply have been developed in a life-cycle hypothesis framework; see Ghez & Becker (1975).
Recent applications and extensions have related to the rapid development of social security and its effects on private savings, and variation in dates of retirement (Feldstein, 1974) on the one hand, and effects of a switch from income or capital taxes to consumption taxes on the other.
The social security studies have necessitated
the use of more carefully defined wealth and income figures.
For Germany, these problems are discussed in
Börsch-Supan et al. (1999).
The empirical research on the life-cycle hypothesis has raised questions as to the adequacy of the life-cycle model without much more attention to bequest issues or allowance for uncertainty as to date of death (Rodepeter & Winter, 1998). In part it is argued that the life cycle may apply to a large section of the population but that the big savers and even the lowest earners may obey different criteria (Kotlikoff & Summers, 1981).
Repeatedly in well-defined samples, though not in all, the decline in wealth with age was not significant; in more finely grouped data by cohorts wealth even rises with age.
A recession is defined to be a period of two consecutive quarters of negative GDP growth.
Thus: a recession is a national or world event, by definition. And statistical aberrations or one-time events can almost never create a recession; e.g. if there were to be movement of economic activity (measured or real) around Jan 1, 2000, it could create the appearance of only one quarter of negative growth. For a recession to occur the real economy must decline.
The reduced form of an econometric model has been rearranged algebraically so that each endogenous variable is on the left side of one equation, and only predetermined variables (exogenous variables and lagged endogenous variables) are on the right side.
The reference point is the individual's point of comparison, the "status
quo" against which alternative scenarios are contrasted. This can be
today's wealth or whatever measure of wealth that is psychologically
important to the individual.
The reference point is an important element of the value function in
Kahneman and Tversky's prospect theory. Taking value as a function of
wealth, the Kahneman-Tversky value function is upward sloping
everywhere, but with an abrupt decline in slope at the reference point.
For wealth levels above the reference point, the value function is
concave downward, just as are conventional utility functions. At the
reference point, the value function may be regarded, from the fact that its slope changes abruptly there, as infinitely concave downward. For wealth levels below the reference point, Kahneman and Tversky found evidence that the value function is concave upward.
As a consequence of such a functional form, the risk attitude
of decision makers will depend on whether they are in a win or a loss situation
relative to their reference point. People become risk lovers in loss situations and risk
averters in win situations.
In behavioral finance this kind of value function is utilized
to explain the so-called disposition effect: the phenomenon that investors are
reluctant to realize their losses but sell winners too early.
Either a sharpening of the concept of strategic (or, Nash) equilibrium, or another criterion to
discard implausible and to select plausible equilibria when a game exhibits multiple equilibria.
For example, symmetric or Pareto efficient equilibria may more plausibly
be played by the players than asymmetric or inefficient equilibria. Likewise, equilibrium
outcomes that are 'focal' in the cultural and psychological context in which the game is played
might be more plausible than those which lack such salient features.
Preferring symmetric outcomes in many games leads to the selection of an equilibrium in mixed
strategies. In the following, we give an idea of the basic modifications of Nash equilibrium in
more complex games.
The reflection effect (Tversky & Kahneman, 1981) refers to having opposite preferences
for gambles differing in the sign of the outcomes (i.e. whether the outcomes are gains or
losses). Reflection effects involve gambles whose outcomes are opposite in sign, although
they do have the same magnitude. For example, most people would choose a certain gain of
$20 over a one-third chance of gaining $60. But they would choose a one-third chance of
losing $60 (and two-thirds chance of losing nothing) over a certain loss of $20.
The outcomes actually involve different domains (gain versus loss), that is, they
differ in sign (+$20 versus -$20).
The difference between reflection and framing effect is that in the framing effect the actual domain does not change (Fagley, 1993); the same outcome is phrased to appear to involve the other domain. So a loss of $20 might be framed to seem like a gain (as when an even larger loss was expected). Framing may cause it to seem like a gain, but it remains, objectively, a loss.
Reflection and framing effects are both predicted in prospect theory by the S shape of the value function: concave for gains indicating risk aversion and convex for losses indicating risk seeking.
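As a rough numerical sketch, the reflection effect in the $20/$60 example above falls out of an S-shaped value function. The parameters below are the Tversky-Kahneman (1992) estimates (alpha = 0.88, lambda = 2.25), and probability weighting is omitted for simplicity, so this illustrates only the curvature argument, not the full theory:

```python
# Tversky-Kahneman (1992) estimated parameters
ALPHA, LAMBDA = 0.88, 2.25

def value(x):
    # Concave for gains, convex (and steeper, by lambda) for losses.
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

# Gains: a certain $20 vs. a 1/3 chance of $60 -> the sure gain wins.
assert value(20) > (1 / 3) * value(60)
# Losses: a certain -$20 vs. a 1/3 chance of -$60 -> the gamble wins.
assert (1 / 3) * value(-60) > value(-20)
print(value(20), (1 / 3) * value(60), value(-20), (1 / 3) * value(-60))
```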
A regression function describes the relationship between dependent variable Y and explanatory variable(s) X. One might estimate the regression function m() in the econometric model
Yi = m(Xi) + ei
where the ei are the residuals or errors. As presented that is a nonparametric or semiparametric model, with few assumptions about m(). If one were to assume also that m(X) is linear in X one would get to a standard linear regression model:
Yi = (Xi)b + ei
where the vector b could be estimated.
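A minimal sketch of estimating b by ordinary least squares on simulated data (the true coefficients below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
b_true = np.array([1.0, -2.0])          # invented for the demonstration
y = X @ b_true + 0.1 * rng.normal(size=200)

# OLS estimate of b in Y_i = (X_i)b + e_i
b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ b_hat
print(b_hat)  # close to [1.0, -2.0]
```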
consumption items that do not directly produce utility, such as health maintenance, transportation to work, and "waiting times"
A U.S. Federal Reserve System rule limiting the interest rates that U.S. banks and savings and loan institutions could pay on deposits.
Insurance purchased by an insurer, often to protect against especially large risks or risks correlated to other risks the insurer faces.
In hypothesis testing. Let T be a test statistic. Possible values of T can be divided into two regions, the acceptance region and the rejection region. If the value of T comes out to be in the acceptance region, the null hypothesis (the one being tested) is accepted, or at any rate not rejected. If T falls in the rejection region, the null hypothesis is rejected.
The terms 'acceptance region' and 'rejection region' may also refer to the subsets of the sample space that would produce statistics T that go into the acceptance region or rejection region as defined above.
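As a sketch, a two-sided z-test with known variance makes the regions concrete; the sample values below are invented:

```python
from statistics import NormalDist

# H0: the mean is 0, with known sd 1 and sample size n. At the 5% level
# the rejection region is |T| > 1.96; the acceptance region is |T| <= 1.96.
crit = NormalDist().inv_cdf(0.975)       # ~1.96

def z_statistic(sample_mean, n, mu0=0.0, sigma=1.0):
    return (sample_mean - mu0) / (sigma / n ** 0.5)

T = z_statistic(sample_mean=0.5, n=25)   # T = 0.5 / (1/5) = 2.5
reject = abs(T) > crit                   # 2.5 lies in the rejection region
print(T, reject)
```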
Reliability refers to the accuracy and consistency of a measurement or test; i.e. if a
repetition of the measurement or testing under the same conditions reveals the same
results. Note that reliability contains no information as to whether the behaviour or characteristic that is measured is the intended one.
Rents are returns in excess of the opportunity cost of the resources devoted to the activity.
'Super'-game where a fixed group of players plays a given game repeatedly, with the outcome
of all previous plays observed before the next play begins. Repetition vastly enlarges the set
of possible equilibrium outcomes in a game, as it opens possibilities to 'punish' or 'reward' later
actions such that certain strategies form an equilibrium which would not form one in the single,
unrepeated ('one-shot') game. For example, repeating the prisoners' dilemma game (often enough)
gives rise to many equilibria where both prisoners never confess.
representation
The cognitive representation of (social) information has been an important concern in (social) psychology since the mid-1970s. The central assumptions are that people often attend selectively to exposed information about a (social) stimulus (a person, an object, or an event), focussing on some features while disregarding others. They interpret the features in terms of previously acquired concepts and knowledge. Moreover, they often infer characteristics of the stimulus that were not actually mentioned in the information, and construe relations among these characteristics that were not specified ("going beyond the information
given", Bruner, 1957b).
In short, the cognitive representations that people form of a stimulus differ in a variety of ways from the information on which they were based.
Yet it is ultimately these representations, and not the original stimuli, that govern subsequent thoughts, judgments, and behaviors. Consequently, it is important to understand the nature of these mediating cognitive representations, to predict the influence of information on perceivers' judgments and/or behavioral decisions about the people and objects to which it refers.
To understand the cognitive determinants of judgments and decisions one must scrutinize the cognitive operations that were performed on information when it was first received, the mental representations that are formed as a result of these operations, and the manner in which these representations were later used to produce judgments or behaviors.
What is the probability that person A (Steve, a very shy and withdrawn man) belongs to group
B (librarians) or C (exotic dancers)? In answering such questions, people typically evaluate
the probabilities by the degree to which A is representative of B or C (Steve's shyness seems more representative of librarians than of exotic dancers) and sometimes neglect base rates (there are far more exotic dancers than librarians in a certain sample).
resale price maintenance
The effect of rules imposed by a manufacturer on wholesale or retail resellers of its own products, to prevent them from competing too fiercely on price and thus driving profits down from the reselling activity. The manufacturer may do this because it wishes to keep resellers profitable. Such contract provisions are usually legal under US law but have not always been allowed since they formally restrict free trade.
reservation wage property
A model has the reservation wage property if agents seeking employment in the model accept all jobs paying wages above some fixed value and reject all jobs paying less.
Minimal amount that has to be bid in order that the bid-taker concedes his
property rights for the object to the highest bidder. If the highest bid fails to reach
at least the reserve price, the auctioneer keeps the object (abstains from a sale).
Although reserve prices reduce the probability of a sale, they can improve the
seller's expected returns because they force bidders with higher valuations to bid
more than they otherwise would. Appropriately designed reserve prices thus are
devices to extract more of the bidders' information rents (see the entry on rents).
The agent who receives the remainder of a random amount once predictable payments are made.
The most common example: consider a firm with revenues, suppliers, holders of bonds it has issued, and stockholders. The suppliers receive the predictable amount they are owed. The bondholders receive a predictable payout -- the debt, plus interest. The stockholders can claim the residual, that is, the amount left over. It may be a negative amount, but it may be large. The same idea of a residual claimant can be applied in analyzing other contracts. There is a historical link to theories about wages; see http://britannica.com/bcom/eb/article/9/0,5716,109009+6+106209,00.html
An attribute of a market.
In securities markets, resiliency is measured by "the speed with which prices recover from a random, uninformative shock." (Kyle, 1985, p 1316).
An abbreviation for the Review of Economics and Statistics.
An estimate of parameters taken with the added requirement that some particular hypothesis about the parameters is true. Note that the estimated error variance of a restricted estimate can never be as low as that of an unrestricted estimate, since imposing the restriction cannot improve the fit.
assumption about parameters in a model
An abbreviation for the journal Review of Economic Studies.
Retention of central tendencies
provides a summary of a category in terms of the central tendencies of the members of that category rather than in terms of the representations of individual instances.
Retirement decisions are an increasingly important aspect of behavior from an applied point of view. In the life cycle of an individual, retirement is the point from which on no labor income is received anymore; the income an individual receives after retirement stems from some social security system or from her own savings.
The retirement decision is important because the timing of retirement determines the amount
of saving and dis-saving during both working life and old age, and hence the aggregate level of
saving in an economy. Also, retirement decisions affect the financial
situation of the social security system that provides pensions.
The point of retirement affects the balance of years during which the
individual contributes to the system and the number of years during
which she receives a pension.
In the most simple case (where complications such as partial retirement
are ignored), retirement is a classical example of an
intertemporal decision under
uncertainty. The main source of uncertainty is, of course,
the point of death, because the individual has to assess the remaining life-time
utility that she can derive from the choices (whether to
retire or not in a given year) she has. Formally,
the basic intertemporal trade-off is to compare the present
value of future utility when retiring now with the utility of working
at least one year longer and retiring then. If the individual does not retire
now, she faces the same decision again next year. Therefore, the mathematical
formulation of the problem has a recursive structure, a fact that
makes the problem more tractable.
Of particular interest in applied work are the incentives to retirement
that are provided by the institutional arrangements of the social security
systems. In empirical studies it has been shown that individuals react
quite strongly to these incentives (e.g., Börsch-Supan & Schnabel, 1997), and this in turn can be seen as evidence for rational behavior in a seemingly quite complicated decision situation.
revelation mechanism
A particular mechanism representing a game of
incomplete information where the players act simultaneously, and where each player's
action only consists of a report about his type,
i.e. private information. In a revealing equilibrium of a revelation mechanism,
for each player the incentive constraints for each type not to mimic another
one are met, as well as the constraints of individual rationality that each type at least earns his reservation utility.
To any equilibrium of a game of incomplete information,
there corresponds an associated revelation mechanism that has an equilibrium where
the players truthfully report their types.
revelation principle
The principle that truth-telling direct revelation mechanisms can generally be designed to achieve the Nash equilibrium outcome of other mechanisms; this can be proven in a large class of mechanism design settings.
Relevant to a modelling (that is, theoretical) context with:
-- two players, usually firms
-- a third party (usually the government) managing a mechanism to achieve a desirable social outcome
-- incomplete information -- in particular, the players have types that are hidden from the other player and from the government.
Generally a direct revelation mechanism (that is, one in which the strategies are just the types a player can reveal about himself) in which telling the truth is a Nash equilibrium outcome can be proven to exist and be equivalent to any other mechanism available to the government. That is the revelation principle. It is used most often to prove something about the whole class of mechanism equilibria, by selecting the simple direct revelation mechanism, proving a result about that, and applying the revelation principle to assert that the result is true for all mechanisms in that context.
Classical result in the theory of auctions about the division of expected social
surplus among risk-neutral bidders and a risk-neutral bid-taker. Whenever the bidders have
independent private valuations for the resource on sale, all auction formats which award the object to the bidder submitting the highest bid lead to the same expected revenue to the bid-taker, and to the same expected profits of the bidders, regardless of the specific payment rule of the auction.
In particular, the equilibrium expected payments in the first price sealed bid auction
or the Dutch auction are the same as in the second price sealed bid auction,
in the English auction, or in any all pay auction. The revenue equivalence
theorem shows that in terms of the objective functions of risk-neutral strategic traders who have independent private information, all 'reasonable' auction formats are equivalent exchange mechanisms.
This equivalence extends to auctions of multiple identical goods if the bidders have unit demands.
It does not hold, however, in common value auctions, with risk-averse traders, or in
auction markets of multiple goods when the bidders bid for more than one item.
Reverse hindsight bias
People who are exposed to a very surprising or unexpected event may react by expressing an
"I did not expect this to happen" response. They attribute the surprise of the
unexpected event to their inability to have foreseen an outcome such as the one obtained
and recall predictions opposite to their judgement of the event after its occurrence.
In other words, the attempt to explain an unexpected event leads to an exaggerated
adjustment in a direction opposite to the hindsight bias.
that tax financing and bond financing of a given stream of government expenditures lead to equivalent allocations. This is the Modigliani-Miller theorem applied to the government.
A way of recoding variables in a data set so that one has a measure not of their absolute values but their positions in the distribution of observed values. Defined in this broadcast to the list of Stata users:
Date: Sat, 20 Feb 1999 14:13:35 +0000
From: Ronan Conroy
Subject: Re: statalist: Standardizing Variables
Paul Turner said (19/2/99 9:54 pm)
>I have two variables--X1 and X2--measured on ordinal scales. X1 ranges
>from 0 to 10; X2 ranges from 0 to 12. What I want to do is to standardize
>X1 and X2 to a common metric in order to explore how differences between
>the two affect the dependent variable of interest. Converting values to
>percentages of the maximum values (10 and 12) is the first approach that
>occurs to me, but I don't know if there's something I'm forgetting
This sort of thing is possible, and called ridit scoring. You replace
each of the original scale points with the percentage (or proportion) of
the sample who scored at or below that value. This gives the scales a
common interpretation as percentiles of the sample, and means that they
are now expressed on an interval metric, though the data are still grainy.
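A minimal sketch of the scoring described in the posting (using the "at or below" convention given there; classical ridits instead use the proportion below plus half the proportion at the value):

```python
def ridit_at_or_below(values):
    # Replace each scale value with the proportion of the sample
    # scoring at or below that value.
    n = len(values)
    return [sum(1 for w in values if w <= v) / n for v in values]

scores = ridit_at_or_below([0, 1, 1, 2, 3])
print(scores)  # [0.2, 0.6, 0.6, 0.8, 1.0]
```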
A generalization of regular Riemann integration.
Quoting from Priestley:
"...when we have two deterministic functions g(t), F(t), the Riemann-Stieltjes integral
R = ∫ab g(t) dF(t)
is defined as the limiting value of the discrete summation"
(sum from i=1 to i=n of) g(ti)[F(ti) - F(ti-1)]
for t1 = a and tn = b as n goes to infinity and "as max(ti - ti-1) -> 0."
If F(t) is differentiable, then the above integral is the same as the regular integral R = ∫ab g(t)F'(t) dt, but the Riemann-Stieltjes integral can be defined in many cases even when F() is not differentiable.
One of the most common uses is when F() is a cdf.
Examples: The expectation of a random variable can be written:
mu = ∫ x f(x) dx
if f(x) is the pdf. It can also be written:
mu = ∫ x dF(x)
where F(x) is the cdf. The two are equivalent for a continuous distribution, but notice that for a discrete one (e.g. a coin flip, with X=0 for heads and X=1 for tails) the second, Riemann-Stieltjes, formulation is well defined but no pdf exists to calculate the first one.
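The coin-flip case can be checked numerically by forming the Stieltjes sum against the step-function cdf; the partition scheme below is one simple choice:

```python
# mu = integral of x dF(x) for a fair coin flip (X = 0 or 1 with
# probability 1/2 each). F is a step function, so no pdf exists,
# but the Stieltjes sum is well defined.
def cdf_coin(x):
    if x < 0:
        return 0.0
    if x < 1:
        return 0.5
    return 1.0

def stieltjes(g, F, a, b, n=100000):
    # sum of g(t_i) * [F(t_i) - F(t_{i-1})] over a fine partition of [a, b]
    h = (b - a) / n
    total = 0.0
    for i in range(1, n + 1):
        t = a + i * h
        total += g(t) * (F(t) - F(t - h))
    return total

mu = stieltjes(lambda x: x, cdf_coin, -1.0, 2.0)
print(mu)  # close to 0.5, the mean of the coin flip
```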
If outcomes will occur with known or estimable probability, the decision maker faces a risk. Certainty is a special case of risk in which this probability is equal to zero or one. Contrast uncertainty.
A decision maker's risk attitude characterizes his willingness to engage in risky prospects.
Focusing on risky prospects with monetary outcomes, a decision maker displays risk aversion if and only if he strictly prefers a certain consequence to any risky prospect whose mathematical expectation of consequences equals that certain amount.
Equivalently, a decision maker is said to be risk averse if and only if he
strictly refuses to participate in fair games (i.e. games with an
expected net outcome of zero). He is said to be a risk preferrer
if and only if he strictly prefers
the above mentioned risky prospect to its certain consequence.
He displays risk neutrality if and only if he is indifferent
between the risky prospect and the certain consequence.
Let u(x) denote a decision maker's utility function on amounts of money.
Risk aversion, risk neutrality, and risk preference correspond to the
strict concavity, linearity, and strict convexity of u(x), respectively.
Risk aversion is equivalent to the strict concavity of u(x), implying decreasing marginal utility of money.
For a risk averter the certainty equivalent of a risky prospect,
which is the amount of money for which the individual is indifferent
between the risky prospect and the certain amount, is strictly less than
the mathematical expectation of the outcomes of the risky prospect.
The degree of (absolute) risk aversion can be measured by means of the Arrow-Pratt
coefficient of risk aversion, which is suitable for both comparisons across
individuals and comparisons across wealth levels of a single decision maker.
Risk aversion of investors belongs to the crucial assumptions of numerous models
in finance theory (e.g., the Capital Asset Pricing Model, CAPM).
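A small numerical sketch of the certainty equivalent and the Arrow-Pratt coefficient, using the strictly concave utility u(x) = ln(x) and an invented 50/50 gamble:

```python
import math

# Gamble: 50/50 chance of wealth 50 or 150; expected wealth is 100.
outcomes, probs = [50.0, 150.0], [0.5, 0.5]
expected_wealth = sum(p * x for p, x in zip(probs, outcomes))
expected_utility = sum(p * math.log(x) for p, x in zip(probs, outcomes))

# Certainty equivalent: the sure amount with the same utility, u^{-1}(EU).
ce = math.exp(expected_utility)     # sqrt(50 * 150), about 86.6
assert ce < expected_wealth         # risk averter: CE below the expectation

# Arrow-Pratt coefficient of absolute risk aversion: -u''(x)/u'(x).
# For u(x) = ln(x) this is 1/x, so it falls as wealth rises.
def abs_risk_aversion(x):
    return 1.0 / x                  # -(-1/x^2) / (1/x)

assert abs_risk_aversion(50) > abs_risk_aversion(150)
print(round(ce, 1))
```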
risk free rate puzzle
See equity premium puzzle.
Risk, uncertainty, and ambiguity
Many different definitions of risk, uncertainty, and ambiguity can be found
in the literature. This entry follows the notion commonly used in modern
decision theory, e.g. employed by Tversky and Kahneman (1992)
and much earlier proposed by Knight (1921).
Camerer and Weber (1992)
provide a review of various definitions and formalizations.
A decision is called risky when the
probabilities that certain states will occur in the future are precisely
known, e.g. in a fair roulette game. In contrast, a decision is called
uncertain when the probabilities are not precisely known. Examples are the
outcomes of sports events, elections or most real investments. Decisions
under risk can be seen as a special case of decisions under uncertainty with
precisely known probabilities. Risk and uncertainty can be distinguished by
the degree with which probabilities are known. In case of uncertainty,
probabilities are not precisely known but people can form more or less vague
beliefs about probabilities. If people are definitely not able to form any
beliefs about probabilities, this special case is termed complete ignorance.
The above notion of uncertainty corresponds to the widely used term ambiguity.
A risk-neutral bidder only cares about the expected monetary value of a prospect, regardless of the level of uncertainty.
An abbreviation for the RAND Journal of Economics, which was previously called the Bell Journal of Economics.
Stands for a standard VAR run on standard data, with interest rates (R), money stock (M), inflation (P), and output (Y). In Faust and Irons (1996), these are operationalized by the three-month Treasury bill rate, M2, the CPI, and the GNP.
U.S. legislation of 1936 which made rules against price discrimination by firms. Agitation by small grocers was a principal cause of the law. They were under competitive pressure and displaced by the arrival of chain stores. The Act is thought by many to have prevented reasonable price competition, since it made many pricing actions illegal per se. For many of its provisions, 'good faith' was not a permitted defense. So it can be argued that it was confusing, vague, unnecessarily restrictive, and designed to prevent some competitors in retailing from being driven out rather than to further social welfare generally, e.g. by allowing pricing decisions that would benefit consumers. Other causes: glitches in an earlier law, the Clayton Act.
A robust smoother is a smoother (an estimator of a regression function) that gives lower weights to datapoints that are outliers in the y-direction.
That the CAPM may appear to be rejected in tests not because it is wrong but because the proxies for the market return are not close enough to the true market portfolio available to investors.
A loss function that one might incorporate into an estimate of a function to prevent the estimated function from matching the data closely but at the cost of jerkiness. See 'spline smoothing' and 'cubic spline' for example uses.
An example roughness penalty would be λ∫[m''(u)]² du, where λ is a 'smoothing parameter', m''() is the second derivative of the estimated function, and u is a dummy variable that ranges over the domain of the estimated function.
Paraphrasing from Hanson and Slaughter (1999): in the context of a Heckscher-Ohlin model of international trade with open trade between regions, changes in relative factor supplies can lead to an adjustment in the quantities and types of outputs across regions that returns the system toward equality of production input prices, such as wages, across countries (the state of factor price equalization).
Such theorems are named this way by analogy to Rybczynski (1955), and refer to that part of the mechanism that has to do with output adjustments.