Notes for Mitchell and Hutchison (2008) AAE 575 Fall 2012

Section 4.4: Representing Risky Situations

Need three things for Risk Management

1)  Actions to choose

2)  Events (with probabilities)

3)  Outcomes (to the decision maker)

Represent in either a payoff matrix or a decision tree: the two are equivalent (a result from game theory)

Go through Table 4.1 to understand it: simple/hokey example

Action: Treat or do not treat

Event: Rain (p = 0.1) or does not rain (p=0.9)

Outcomes: four possible \$/ha depending on action and event

Which should a person choose? \$50 with treatment no matter what, or no treatment with the chance of \$100 but the risk of only \$10? A hokey example
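The expected-value arithmetic for this hokey example can be sketched as follows; the mapping of \$10 to rain and \$100 to no rain for the untreated case is an assumption for illustration:

```python
# Hypothetical payoffs ($/ha) in the spirit of Table 4.1: treating pays $50
# regardless of the event; not treating is assumed to pay $10 if it rains
# (p = 0.1) and $100 if it does not.
p_rain = 0.1
payoffs = {
    "treat":    {"rain": 50, "no_rain": 50},
    "no_treat": {"rain": 10, "no_rain": 100},
}

def expected_value(action):
    """Probability-weighted average outcome for the chosen action."""
    o = payoffs[action]
    return p_rain * o["rain"] + (1 - p_rain) * o["no_rain"]

for a in payoffs:
    print(a, expected_value(a))
```

Under these assumed payoffs, not treating has the higher mean, which is why the choice hinges on attitudes toward the \$10 downside rather than on the mean alone.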

Key point: Actions have to affect events/probabilities and/or outcomes.

Otherwise there is nothing to manage

Table 4.1: Action affects the outcomes

Could have action affect probabilities:

Action: put or don’t put in sweet corn patch

Event: raccoons attack/don’t attack sweet corn patch

Outcome: loss due to raccoons eating sweet corn

Affect outcomes = insurance. Affect probabilities = protection

[not a broad/widely used distinction, but some do use it]

Table 4.2: more realistic example [subjective probabilities]

Action: Use IPM or conventional system

Event: Three levels of pest pressure: low, moderate, high, with p = 0.7, 0.2, and 0.1.

Outcomes: six possible \$/ha net returns

Figure 4.1: Decision Tree for same example

Figure 4.2: plots the pdf and cdf of the outcomes (net returns) for the two actions

Which action/distribution of \$/ha outcomes do you choose?

First: how do we talk about the “risk” in outcomes/net returns?

·  Central tendency: Mean, median, mode (seen these already)

·  Dispersion

o  [symmetric]: variance/st. dev., CV, risk-return (Sharpe) ratio (seen these already)

o  [asymmetric]: Value at Risk (VaR): choose a key probability and find the corresponding outcome, or choose a key outcome (break even or a profit target) and find its probability

o  Both need the cdf
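A minimal sketch of the two asymmetric measures on a discrete outcome distribution (the outcome/probability pairs are made up for illustration, not taken from Table 4.2):

```python
# Discrete distribution of net returns: ($/ha outcome, probability) pairs.
outcomes = [(-50, 0.1), (20, 0.2), (80, 0.7)]

def var_at(alpha):
    """Value at Risk: smallest outcome x with cdf P(return <= x) >= alpha."""
    cum = 0.0
    for x, p in sorted(outcomes):   # walk up the cdf
        cum += p
        if cum >= alpha - 1e-12:
            return x

def prob_below(target):
    """Probability that returns fall below a key outcome (e.g., break even)."""
    return sum(p for x, p in outcomes if x < target)

print(var_at(0.10))     # 10% VaR
print(prob_below(0.0))  # probability of losing money
```

Both functions walk the same cdf, just from different directions: VaR fixes the probability and reads off the outcome; the break-even measure fixes the outcome and reads off the probability.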

Figure 4.4: Collect experimental data to “smooth” the pdf/cdf and replace subjective probabilities with objective probabilities.

Does not change the overall picture, just smoother now: how do we estimate a smooth pdf/cdf once we have data like that used for Figure 4.4?

Section 4.5: Decision Making Criteria and Tools

Define/describe various criteria to use to choose the “best” action = how to “optimize” when must make decision under risk.

1)  Not using probabilities (game theory strategies)

2)  Using probabilities (“risk management”)

No Probabilities

·  Maximin = minimax: choose action that gives best outcome for worst case scenarios: maximize the minimum (maximin) outcome or minimize the maximum loss (minimax)

·  Maximax: choose action with best outcome among the best case scenarios (impractical, but a useful overly optimistic base case for comparison)

·  Simple Average: ignore probabilities = give all outcomes equal weight (1/n)

Table 4.1: maximin/minimax = Treat, Maximax and Simple Average = Do not treat

Table 4.2: all three = IPM

Table 4.3 (objective probabilities): maximin/minimax and Simple Average = IPM, Maximax = Conventional
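The three no-probability criteria are easy to compute from a payoff matrix; this sketch uses made-up payoffs, not the actual Table 4.2/4.3 values:

```python
# Rows = actions, columns = events (low, moderate, high pest pressure).
# Payoffs ($/ha) are illustrative only.
payoffs = {"IPM": [60, 40, 20], "Conventional": [90, 30, -10]}

# Maximin: best worst-case outcome across actions.
maximin = max(payoffs, key=lambda a: min(payoffs[a]))
# Maximax: best best-case outcome across actions.
maximax = max(payoffs, key=lambda a: max(payoffs[a]))
# Simple average: equal weight 1/n on every outcome.
simple_avg = max(payoffs, key=lambda a: sum(payoffs[a]) / len(payoffs[a]))

print(maximin, maximax, simple_avg)
```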

With Probabilities

·  Safety First Criteria (common in developing nation and human health contexts)

o  Minimize probability that returns less than zero (or other target)

o  Maximize mean outcome, subject to a constraint: the probability that returns fall below a set target must not exceed a chosen level

·  Maximize Mean outcome: choose action that gives the highest mean outcome

o  Called “risk neutral”, ignores variability/dispersion

o  E.g., with mean = \$1000, action A with st dev of \$100 is ranked the same as action B with st dev of \$500

o  Most people willing to trade off between mean and variability: give up some mean returns in trade for a lower variability (e.g., insurance)

Positive (Descriptive): How do people actually trade off between mean and variability (or asymmetry)? Try to find a theory that describes what people actually do.

Normative (Prescriptive): How should people trade off between mean and variability/asymmetry? Define rules/BMPs for managing risky situations (finance, EPA, FDA, etc.)

Risk Preferences: risk neutral, risk loving, risk averse

Simple choice:

a) random returns with mean m and variability/dispersion/spread σa

b) random returns with mean m and variability/dispersion/spread σb > σa

·  Risk neutral: indifferent between a and b

·  Risk averse: choose a: same mean, lower variability/dispersion/spread

·  Risk loving: choose b: same mean, more variability/dispersion/spread

Certainty Equivalent: Certain (non-random) return that makes the decision maker indifferent between the random choice and the certain outcome.

How much money would you need to be as well off as choosing the random payoff?

If the Mean Return = μπ, then

If CE < μπ, then risk averse

If CE = μπ, then risk neutral

If CE > μπ, then risk loving

Risk Premium: difference between mean return and CE return: RP = μπ – CE, or CE + RP = μπ

If RP > 0, then risk averse

If RP = 0, then risk neutral

If RP < 0, then risk loving
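The CE/RP classification rules above can be coded directly:

```python
def classify(ce, mean_return):
    """Classify risk preferences from the certainty equivalent (CE)
    and the mean return, using RP = mean - CE as defined above."""
    rp = mean_return - ce
    if rp > 0:
        return "risk averse"
    if rp < 0:
        return "risk loving"
    return "risk neutral"

# A CE below the mean return implies a positive risk premium.
print(classify(900, 1000))
```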

Preference/Utility Function: function that quantifies how people weight outcomes for risk

Standard Graphical Presentation

[graph omitted: utility plotted against returns π, with the expected return E[π] marked]

Mean-Variance Utility: U(π) = μπ – (b/2)σπ²

Value of b defines the person’s risk preferences and degree of risk aversion

Decision Rule: choose the action that gives the highest value of U(π)

Example: Suppose the fertilizer rate N affects both the mean and variance of corn yield; then the economic problem for choosing the optimal N rate, assuming mean-variance preferences, is to choose N to maximize U(π) = μπ(N) – (b/2)σπ²(N)

Just need the equations for how mean and variance of yield are affected by N rate (estimate with field data), plus the risk preference parameter b (and prices and cost).

Problems:

1)  What risk preference parameter b to use?

2)  Are we sure farmers have mean-variance preferences?

Similar Alternative: Mean-St Dev Utility: U(π) = μπ – bσπ

Note: remember that CE = Mean – RP, so technically, mean-variance and mean-st dev preferences are actually CE = Mean – RP. The U(π) is actually the CE, and the RP is the second term: (b/2)σπ² for mean-variance or bσπ for mean-st dev.

Implementation of these preferences: Choose input x to maximize the CE
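A sketch of the fertilizer-rate example under mean-variance preferences: choose N to maximize CE = mean – (b/2)·variance. The quadratic yield response, the variance function, and all parameter values are illustrative assumptions, not estimates from field data:

```python
# Illustrative parameters: corn price ($/kg), N cost ($/kg), risk parameter b.
price, cost, b = 0.15, 0.5, 0.0001

def mean_yield(n):
    """Assumed quadratic mean yield response to N (kg/ha)."""
    return 5000 + 30 * n - 0.06 * n ** 2

def var_yield(n):
    """Assumed yield variance, increasing in N."""
    return 250_000 + 800 * n

def ce(n):
    """Certainty equivalent under mean-variance preferences."""
    mean_pi = price * mean_yield(n) - cost * n
    var_pi = price ** 2 * var_yield(n)   # Var(p*Y) = p^2 * Var(Y)
    return mean_pi - 0.5 * b * var_pi

# Grid search over 0-300 kg N/ha for the CE-maximizing rate.
best_n = max(range(0, 301), key=ce)
print(best_n, round(ce(best_n), 2))
```

With these made-up parameters the risk term shaves a little off the risk-neutral optimum, since variance rises with N.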

Expected Utility Theory

John von Neumann (and Morgenstern):

People choose actions to maximize their expected utility, i.e., the expected value of U(π)

Back to our example:

Question: what utility function to use?

Many types of utility functions proposed/used:

Constant Absolute Risk Aversion (CARA): U(π) = –exp(–rπ)

Constant Relative Risk Aversion (CRRA): U(π) = π^(1–r)/(1–r) for r ≠ 1, U(π) = ln(π) for r = 1

Decreasing Absolute Risk Aversion (DARA) Utility: e.g., U(π) = ln(π), or any CRRA form (CRRA implies DARA)

Exponential-Power Utility: U(π) = 1 – exp(–βπ^α)

Lots of research has gone into “eliciting risk preferences”: experiments and data collection, with estimation, to determine which utility function(s) and parameter values are most consistent with what people actually do.

Technical Issue: for these utility functions (which have desired theoretical properties) and the profit distributions we commonly observe, no closed-form expression for E[U(π(x))] exists, so we must use numerical methods to find the economically optimal x.

Note: there are other theories of human decision making under risk (non-expected utility theory), in which probabilities enter the utility functional nonlinearly, rather than linearly as in EU theory: rank-dependent expected utility, ambiguity aversion, loss aversion/prospect theory, etc.

Here: we will use only the von Neumann–Morgenstern Expected Utility Hypothesis

Case Study: Cabbage IPM

Small plot research, IPM and conventional

·  Number of sprays and yield imply net returns (πi) based on cabbage price and spray cost

·  Convert frequencies to probabilities by dividing each frequency by 24 to get the probabilities pi

·  For each treatment

o  Calculate mean as μ = Σi pi πi, where i indexes outcomes

o  Calculate variance as σ² = Σi pi(πi – μ)²

o  Calculate standard deviation as σ = √(σ²)

o  Calculate utility for each outcome as Ui = –exp(–rπi)

o  Calculate expected utility as EU = Σi pi Ui

o  Calculate CE as CE = –ln(–EU)/r

o  Calculate CE for Mean-Variance Utility as CE = μ – (b/2)σ²

Derive CE using calculated EU for CARA utility by solving U(CE) = EU for CE

–exp(–rCE) = EU

exp(–rCE) = –EU

–rCE = ln(–EU)

CE = –ln(–EU)/r
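The per-treatment calculations and the CARA CE formula just derived can be sketched as follows; the outcome/probability pairs are placeholders, not the actual small-plot cabbage data, and r is an assumed CARA coefficient:

```python
import math

# Placeholder distribution for one treatment: ($/ha net return, probability),
# with probabilities formed as frequency/24 as in the case study.
r = 0.001   # assumed CARA risk-aversion coefficient
outcomes = [(1200, 6 / 24), (1500, 10 / 24), (1800, 8 / 24)]

mean = sum(p * x for x, p in outcomes)                 # mu = sum p_i * pi_i
var = sum(p * (x - mean) ** 2 for x, p in outcomes)    # sigma^2
sd = math.sqrt(var)
eu = sum(p * -math.exp(-r * x) for x, p in outcomes)   # EU under CARA
ce_cara = -math.log(-eu) / r                           # invert U(CE) = EU

print(round(mean, 2), round(sd, 2), round(ce_cara, 2))
```

Repeating this for IPM and Conventional and comparing the CEs implements decision criterion (c).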

Decision Criteria

Choose treatment (IPM or Conventional) with

a)  Greatest expected profit

b)  Greatest CE under mean-variance utility

c)  Greatest CE with CARA utility

Next Step

·  Use small plot data to estimate pdf of net returns for each case

·  Use simulation to estimate E[π] and EU for IPM and Conventional

·  Same Decision Criteria

Monte Carlo Simulation

Suppose have variable x with pdf f(x) and want to know E[x] = ∫ x f(x) dx

Suppose you cannot solve the integral

However, you can obtain many random draws from the pdf f(x)

Monte Carlo Approximation: E[x] ≈ (1/K) Σk xk, where xk is the kth random draw from the pdf f(x)

More Common: know x ~ f(x), but want to know E[g(x)] = ∫ g(x) f(x) dx

Monte Carlo Approximation: E[g(x)] ≈ (1/K) Σk g(xk), where xk is the kth random draw from the pdf f(x)

Example

·  Suppose have input x ~ lognormal with mean m and st dev s

·  Yield is a negative exponential function of this random input: Y = Ymax(1 – exp(–cx))

·  Net returns are π = pY – wx

·  Utility is CARA: U(π) = –exp(–rπ)

What is expected yield, expected profit and expected utility?

1)  Draw “many” x’s from lognormal pdf with mean m and st dev s

2)  Calculate yield, net returns and utility for each draw

3)  Average of yield, net returns and utility are the Monte Carlo integral estimate of expected yield, expected profit and expected utility

Key: how do you obtain draws of x and how many to draw?

Apply the decision criteria: Set x at the value that maximizes E[π], CE, or …
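The three Monte Carlo steps above can be sketched as follows. The lognormal parameters, the yield function Y = Ymax(1 – exp(–cx)), the prices, and the CARA coefficient are all illustrative assumptions:

```python
import math
import random

random.seed(0)
# Note: lognormvariate takes the mean and st dev of ln(x), not of x itself;
# converting the notes' (m, s) to log-scale parameters is a separate step.
mu, sigma = 1.0, 0.5
ymax, c = 10_000, 0.8          # assumed yield-response parameters
price, w, r = 0.15, 2.0, 0.001 # output price, input price, CARA coefficient

K = 100_000
y_sum = pi_sum = u_sum = 0.0
for _ in range(K):
    x = random.lognormvariate(mu, sigma)  # step 1: draw the random input
    y = ymax * (1 - math.exp(-c * x))     # step 2: yield...
    pi = price * y - w * x                # ...net returns...
    u = -math.exp(-r * pi)                # ...and utility for this draw
    y_sum += y
    pi_sum += pi
    u_sum += u

# Step 3: sample averages approximate E[Y], E[pi], and E[U].
print(y_sum / K, pi_sum / K, u_sum / K)
```

Wrapping this loop in a function of a choice variable (e.g., a management input) and maximizing the resulting E[π] or CE implements the decision criteria numerically.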

What if multiple random variables?

a)  Uncorrelated

b)  Correlated
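For case (b), one standard approach is to draw correlated normals using a Cholesky factor; for two variables the factor reduces to the closed form used below. The yield/price parameters and the correlation are illustrative assumptions:

```python
import math
import random

random.seed(1)
mu1, s1 = 100.0, 10.0   # e.g., yield mean and st dev (illustrative)
mu2, s2 = 4.0, 0.5      # e.g., price mean and st dev (illustrative)
rho = -0.3              # assumed yield-price correlation

def correlated_pair():
    """Two correlated normal draws via the 2x2 Cholesky factor of the
    correlation matrix: x2 loads rho on z1 and sqrt(1-rho^2) on z2."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1 = mu1 + s1 * z1
    x2 = mu2 + s2 * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
    return x1, x2

draws = [correlated_pair() for _ in range(50_000)]

# Check: the sample correlation should be close to rho.
n = len(draws)
m1 = sum(a for a, _ in draws) / n
m2 = sum(b for _, b in draws) / n
cov = sum((a - m1) * (b - m2) for a, b in draws) / n
corr = cov / (s1 * s2)  # uses the true st devs for simplicity
print(round(corr, 3))
```

For the uncorrelated case (a), each variable can simply be drawn independently from its own pdf.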