Microeconomics I
© Leopold Sögner
Department of Economics and Finance, Institute for Advanced Studies
Stumpergasse 56, 1060 Wien
Tel: +43-1-59991 182
soegner@ihs.ac.at
http://www.ihs.ac.at/~soegner
September, 2011

Uncertainty (1)
• Preferences and lotteries.
• Von Neumann-Morgenstern expected utility theorem.
• Attitudes towards risk.
• State dependent utility, subjective utility.
Mas-Colell, Chapter 6.

Lotteries (1)
• A risky alternative results in one of a number of different events or states of the world, ω_i.
• The events are associated with consequences or outcomes, z_n. Each z_n involves no uncertainty.
• Outcomes can be money prizes, wealth levels, consumption bundles, etc.
• Assume that the set of outcomes is finite: Z = {z_1, ..., z_N}.
• E.g. flip a coin: events {H, T}, with head H or tail T, and outcomes Z = {−1, 1}.

Lotteries (2)
• Definition - Simple Gamble/Simple Lottery [D 6.B.1]: Given the consequences {z_1, ..., z_N} ⊆ Z, N finite, a simple gamble assigns a probability p_n to each outcome z_n, with p_n ≥ 0 and ∑_{n=1}^N p_n = 1.
• Notation: L = (p_1 ◦ z_1, ..., p_N ◦ z_N) or L = (p_1, ..., p_N).
• Let us fix the set of outcomes Z: different lotteries then correspond to different probability vectors.
• Definition - Set of Simple Gambles: The set of simple gambles on Z is given by L_S = {(p_1 ◦ z_1, ..., p_N ◦ z_N) | p_n ≥ 0, ∑_{n=1}^N p_n = 1} = {L | p_n ≥ 0, ∑_{n=1}^N p_n = 1}.

Lotteries (3)
• Definition - Degenerate Lottery: L̃_n = (0 ◦ z_1, ..., 1 ◦ z_n, ..., 0 ◦ z_N).
• "Z ⊆ L_S", since L̃_n = (0 ◦ z_1, ..., 1 ◦ z_n, ..., 0 ◦ z_N) for every n.
• If z_1 is the smallest element and z_N the largest one, then also (α ◦ z_1, 0 ◦ z_2, ..., 0 ◦ z_{N−1}, (1 − α) ◦ z_N) ∈ L_S.
• Remark: In terms of probability theory, the elements of Z with p > 0 provide the support of the distribution of a random variable z. I.e. a lottery L is a probability distribution.

Lotteries (4)
• With N consequences, every simple lottery can be represented by a point in the (N − 1)-dimensional simplex ∆^{N−1} = {p ∈ R^N_+ | ∑ p_n = 1}.
• At each corner n we have the degenerate case p_n = 1.
• At interior points p_n > 0 for all n.
• See Ritzberger, pp. 36-37, Figures 2.1 and 2.2, or Figure 6.B.1, page 169.
• Equivalent to Machina's triangle for N = 3: {(p_1, p_3) ∈ [0, 1]^2 | 0 ≤ 1 − p_1 − p_3 ≤ 1}.

Lotteries (5)
• The consequences of a lottery need not be a z ∈ Z but can also be further lotteries.
• Definition - Compound Lottery [D 6.B.2]: Given K simple lotteries L_k and probabilities α_k ≥ 0 with ∑_k α_k = 1, the compound lottery is L_C = (α_1 ◦ L_1, ..., α_k ◦ L_k, ..., α_K ◦ L_K). It is the risky alternative that yields the simple lottery L_k with probability α_k.
• The support of the compound lottery is the union of the supports of the lotteries generating it.

Lotteries (6)
• Definition - Reduced Lottery: For any compound lottery L_C we can construct a reduced lottery/simple gamble L ∈ L_S. With the probability vectors p^k of the lotteries L_k we get p = ∑_k α_k p^k, such that the probability of each z_n ∈ Z is p_n = ∑_k α_k p_n^k.
• Example: Example 2.5, Ritzberger, p. 37.
• A reduced lottery can be expressed as a convex combination of the elements of a compound lottery (see Ritzberger, Figure 2.3, page 38). I.e. α p^{l_1} + (1 − α) p^{l_2} = p^{l_reduced}.
• Remark: This linear structure carries over to von Neumann-Morgenstern decision theory.
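The reduction formula p_n = ∑_k α_k p_n^k is just a probability-weighted average of the component lotteries. A minimal Python sketch of this computation (the outcome set and all numbers below are made-up examples, not taken from the slides):

# Reduce a compound lottery (alpha_1 ∘ L_1, ..., alpha_K ∘ L_K) to a simple lottery:
# p_n = sum_k alpha_k * p_n^k. Illustrative sketch; all numbers are made up.

def reduce_compound(alphas, lotteries):
    """alphas: mixing weights summing to one; lotteries: probability vectors over Z."""
    assert abs(sum(alphas) - 1.0) < 1e-12
    n_outcomes = len(lotteries[0])
    return [sum(a * L[n] for a, L in zip(alphas, lotteries)) for n in range(n_outcomes)]

# Two simple lotteries over Z = {z_1, z_2, z_3} mixed with weights 0.5 : 0.5
L1 = [0.5, 0.5, 0.0]
L2 = [0.0, 0.5, 0.5]
print(reduce_compound([0.5, 0.5], [L1, L2]))   # [0.25, 0.5, 0.25]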
von Neumann-Morgenstern Utility (1)
• Here we assume that any decision problem can be expressed by means of a lottery (simple gamble).
• Only the outcomes matter.
• Consumers are able to perform calculations as in probability theory; gambles with the same probability distribution on Z are equivalent.

von Neumann-Morgenstern Utility (2)
• Axiom vNM1 - Completeness: For two gambles L_1 and L_2 in L_S either L_1 ≽ L_2, L_2 ≽ L_1, or both.
• Here we assume that a consumer is able to rank also risky alternatives. I.e. Axiom vNM1 is stronger than Axiom 1 under certainty.
• Axiom vNM2 - Transitivity: For three gambles L_1, L_2 and L_3: L_1 ≽ L_2 and L_2 ≽ L_3 implies L_1 ≽ L_3.

von Neumann-Morgenstern Utility (3)
• Axiom vNM3 - Continuity [D 6.B.3]: The preference relation ≽ on the space of simple lotteries is continuous if for any L_1, L_2, L_3 the sets {α ∈ [0, 1] | αL_1 + (1 − α)L_2 ≽ L_3} ⊆ [0, 1] and {α ∈ [0, 1] | L_3 ≽ αL_1 + (1 − α)L_2} ⊆ [0, 1] are closed.
• Later we show: for any gamble L ∈ L_S there exists some probability α such that L ∼ αL̄ + (1 − α)L̲, where L̄ is a best and L̲ a worst lottery.
• This assumption rules out lexicographic orderings of preferences (safety-first preferences).
• Small changes in the probabilities do not change the ordering of the lotteries.

von Neumann-Morgenstern Utility (4)
• Consider the outcomes Z = {1000, 10, death}, where 1000 ≻ 10 ≻ death. L_1 gives 10 with certainty.
• If vNM3 holds then L_1 can be expressed by means of a linear combination of 1000 and death. If there is no α ∈ [0, 1] fulfilling this requirement, vNM3 does not hold.
• vNM3 rules out Bernoulli utility levels of ±∞.

von Neumann-Morgenstern Utility (5)
• Axiom - Monotonicity: For all probabilities α, β ∈ [0, 1], αL̄ + (1 − α)L̲ ≽ βL̄ + (1 − β)L̲ if and only if α ≥ β.
• Counterexample where this assumption is not met: a safari hunter who prefers an alternative with the bad outcome.

von Neumann-Morgenstern Utility (6)
• Axiom vNM4 - Independence, Substitution: For all lotteries L_1, L_2 and L_3 in L_S and α ∈ [0, 1]: L_1 ≽ L_2 ⇔ αL_1 + (1 − α)L_3 ≽ αL_2 + (1 − α)L_3.
• This axiom implies that the preference ordering of the mixtures is independent of the third lottery.
• This axiom has no parallel in consumer theory under certainty.

von Neumann-Morgenstern Utility (7)
• Example: consider a bundle x_1 consisting of 1 cake and 1 bottle of wine, x_2 = (3, 0), x_3 = (3, 3). Assume that x_1 ≻ x_2. Axiom vNM4 would require that αx_1 + (1 − α)x_3 ≻ αx_2 + (1 − α)x_3; here α > 0.

von Neumann-Morgenstern Utility (8)
• Lemma - vNM1-4 imply monotonicity: If L_1 ≻ L_2 then αL_1 + (1 − α)L_2 ≽ βL_1 + (1 − β)L_2 for arbitrary α, β ∈ [0, 1] with α ≥ β (strictly if α > β). Moreover, for any L with L_1 ≽ L ≽ L_2 there is a unique γ such that γL_1 + (1 − γ)L_2 ∼ L.
• See steps 2-3 of the vNM existence proof.

von Neumann-Morgenstern Utility (9)
• Definition - von Neumann-Morgenstern Expected Utility Function [D 6.B.5]: A real-valued function U : L_S → R has expected utility form if there is an assignment of numbers (u_1, ..., u_N) such that for every lottery L ∈ L_S we have U(L) = ∑_{z_n ∈ Z} p(z_n) u(z_n). A function of this structure is said to satisfy the expected utility property; it is called a von Neumann-Morgenstern utility function.
• Note that this function is linear in the probabilities p_n.
• u(z_n) is called the Bernoulli utility function.
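The expected utility form is easy to operationalize: given a Bernoulli utility u over outcomes and a probability vector, U(L) is a weighted sum, and linearity in the probabilities can be checked directly. A small Python sketch (the outcomes, lotteries and the choice u = log are arbitrary illustrative assumptions):

import math

# Evaluate a utility function in expected utility form, U(L) = sum_n p_n * u(z_n),
# and check its linearity in the probabilities. Outcomes and lotteries are made up.

def U(probs, outcomes, u):
    return sum(p * u(z) for p, z in zip(probs, outcomes))

u = math.log                       # an arbitrary Bernoulli utility over money outcomes
Z = [10.0, 50.0, 100.0]
L = [0.2, 0.5, 0.3]                # a simple lottery over Z
print(U(L, Z, u))                  # expected utility of L

# Linearity: the utility of a mixture equals the mixture of the utilities
L1, L2, a = [1.0, 0.0, 0.0], [0.0, 0.0, 1.0], 0.4
mix = [a * p + (1 - a) * q for p, q in zip(L1, L2)]
print(U(mix, Z, u), a * U(L1, Z, u) + (1 - a) * U(L2, Z, u))   # equal values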
von Neumann-Morgenstern Utility (10)
• Proposition - Linearity of the von Neumann-Morgenstern Expected Utility Function [P 6.B.1]: A utility function U has expected utility form if and only if it is linear, that is to say, U(∑_{k=1}^K α_k L_k) = ∑_{k=1}^K α_k U(L_k).

von Neumann-Morgenstern Utility (11)
Proof:
• Suppose that U(∑_{k=1}^K α_k L_k) = ∑_{k=1}^K α_k U(L_k) holds. We have to show that U has expected utility form.
• If U is linear then we can express any lottery L by means of a compound lottery with probabilities α_n = p_n and degenerate lotteries L̃_n, i.e. L = ∑_n p_n L̃_n. By linearity we get U(L) = U(∑_n p_n L̃_n) = ∑_n p_n U(L̃_n).
• Define u(z_n) = U(L̃_n). Then U(L) = U(∑_n p_n L̃_n) = ∑_n p_n U(L̃_n) = ∑_n p_n u(z_n). Therefore U(·) has expected utility form.

von Neumann-Morgenstern Utility (12)
Proof:
• Suppose that U(L) = ∑_{n=1}^N p_n u(z_n) holds. We have to show that U is linear.
• Consider a compound lottery (L_1, ..., L_K; α_1, ..., α_K). Its reduced lottery is L = ∑_k α_k L_k.
• Then U(∑_k α_k L_k) = ∑_n (∑_k α_k p_n^k) u(z_n) = ∑_k α_k (∑_n p_n^k u(z_n)) = ∑_k α_k U(L_k).

von Neumann-Morgenstern Utility (13)
• Proposition - Existence of a von Neumann-Morgenstern Expected Utility Function [P 6.B.3]: Suppose the Axioms vNM1-4 are satisfied for a preference ordering ≽ on L_S. Then ≽ admits an expected utility representation. I.e. there exists a real-valued function u(·) on Z, assigning a real number to each outcome, such that for any pair of lotteries L_1 ≽ L_2 ⇔ U(L_1) = ∑_{n=1}^N p^{l_1}(z_n) u(z_n) ≥ U(L_2) = ∑_{n=1}^N p^{l_2}(z_n) u(z_n).

von Neumann-Morgenstern Utility (14)
Proof:
• Suppose that there is a best lottery L̄ and a worst lottery L̲. With a finite set of outcomes this can easily be shown by means of the independence axiom. In addition, L̄ ≻ L̲.
• By the definition of L̄ and L̲ we get: L̄ ≽ L_c ≽ L̲, L̄ ≽ L_1 ≽ L̲ and L̄ ≽ L_2 ≽ L̲.
• We have to show that (i) u(z_n) exists and (ii) that for any compound lottery L_c = βL_1 + (1 − β)L_2 we have U(βL_1 + (1 − β)L_2) = βU(L_1) + (1 − β)U(L_2) (expected utility structure).

von Neumann-Morgenstern Utility (15)
Proof:
• Step 1: By the independence Axiom vNM4 we get: if L_1 ≻ L_2 and α ∈ (0, 1), then L_1 ≻ αL_1 + (1 − α)L_2 ≻ L_2.
• This follows directly from the independence axiom: L_1 ∼ αL_1 + (1 − α)L_1 ≻ αL_1 + (1 − α)L_2 ≻ αL_2 + (1 − α)L_2 = L_2.

von Neumann-Morgenstern Utility (16)
Proof:
• Step 2: Assume β > α; then (by monotonicity) βL̄ + (1 − β)L̲ ≻ αL̄ + (1 − α)L̲, and vice versa.
• Define γ = (β − α)/(1 − α); the assumptions imply γ ∈ [0, 1].

von Neumann-Morgenstern Utility (17)
Proof:
• Then βL̄ + (1 − β)L̲ = γL̄ + (1 − γ)(αL̄ + (1 − α)L̲) ≻ γ(αL̄ + (1 − α)L̲) + (1 − γ)(αL̄ + (1 − α)L̲) ∼ αL̄ + (1 − α)L̲.

von Neumann-Morgenstern Utility (18)
Proof:
• Step 2 (converse): We have to show that βL̄ + (1 − β)L̲ ≻ αL̄ + (1 − α)L̲ implies β > α. We show this by means of the contrapositive: if β ≤ α then αL̄ + (1 − α)L̲ ≽ βL̄ + (1 − β)L̲.
• Thus assume β ≤ α; then αL̄ + (1 − α)L̲ ≽ βL̄ + (1 − β)L̲ follows in the same way as above. If α = β, indifference follows.

von Neumann-Morgenstern Utility (19)
Proof:
• Step 3: There is a unique α_L such that L ∼ α_L L̄ + (1 − α_L)L̲.
• Existence follows from L̄ ≻ L̲ and the continuity axiom. Uniqueness follows from step 2.
• Ad existence: define the sets {α ∈ [0, 1] | αL̄ + (1 − α)L̲ ≽ L} and {α ∈ [0, 1] | L ≽ αL̄ + (1 − α)L̲}. Both sets are closed, both are nonempty, and any α belongs to at least one of them. Their complements are open and disjoint. The set [0, 1] is connected ⇒ there is at least one α belonging to both sets.
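Step 3 can be illustrated numerically: given a preference oracle that compares the mixture αL̄ + (1 − α)L̲ with L, monotonicity (step 2) lets bisection on [0, 1] converge to the indifference weight α_L, the number that step 4 will use as U(L). In the sketch below the oracle is generated from hypothetical utility numbers (u(L̄) = 1, u(L̲) = 0, u(L) = 0.3), purely for illustration; in the proof the comparisons come from the axioms, not from a utility function.

# Numerical illustration of step 3: locate alpha_L with L ~ alpha_L*Lbar + (1-alpha_L)*Lworst.
# The preference oracle below is driven by hypothetical utility numbers.

def make_oracle(u_best, u_worst, u_L):
    """Return a comparison: +1 if the mix is preferred to L, -1 if dispreferred, 0 if indifferent."""
    def compare(alpha, tol=1e-12):
        diff = alpha * u_best + (1.0 - alpha) * u_worst - u_L
        return 0 if abs(diff) < tol else (1 if diff > 0 else -1)
    return compare

compare = make_oracle(u_best=1.0, u_worst=0.0, u_L=0.3)

lo, hi = 0.0, 1.0            # lo: mix not preferred to L; hi: mix weakly preferred to L
for _ in range(60):          # bisection; monotonicity from step 2 gives uniqueness
    mid = 0.5 * (lo + hi)
    if compare(mid) >= 0:
        hi = mid
    else:
        lo = mid
print(hi)                    # ~0.3, the calibration weight alpha_L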
Connected Sets
• Definition: Let X be a topological space. A separation of X is a pair U, V of disjoint nonempty open subsets of X whose union is X. The space is said to be connected if there does not exist a separation of X. (See e.g. Munkres, J., Topology, page 148.)
• Example: The rationals are not connected.
• Example: [−1, 1] is connected: [−1, 0] and (0, 1] are disjoint and cover X, but the first set is not open. Alternatively, for X = [−1, 0) ∪ (0, 1] we would get a separation.

von Neumann-Morgenstern Utility (20)
Proof:
• Step 4: The function U(L) = α_L represents the preference relation ≽.
• Consider L_1, L_2 ∈ L_S: If L_1 ≽ L_2 then α_1 ≥ α_2. If α_1 ≥ α_2 then L_1 ≽ L_2 by steps 2-3.
• It remains to show that this utility function has expected utility form.

von Neumann-Morgenstern Utility (21)
Proof:
• Step 5: U(L) has expected utility form.
• We show that the linear structure also holds for the compound lottery L_c = βL_1 + (1 − β)L_2.
• By using independence we get: βL_1 + (1 − β)L_2 ∼ β(α_1 L̄ + (1 − α_1)L̲) + (1 − β)L_2 ∼ β(α_1 L̄ + (1 − α_1)L̲) + (1 − β)(α_2 L̄ + (1 − α_2)L̲) ∼ (βα_1 + (1 − β)α_2)L̄ + (β(1 − α_1) + (1 − β)(1 − α_2))L̲.
• By the rule developed in step 4, this shows that U(L_c) = U(βL_1 + (1 − β)L_2) = βα_1 + (1 − β)α_2 = βU(L_1) + (1 − β)U(L_2).

von Neumann-Morgenstern Utility (22)
• Proposition - von Neumann-Morgenstern Expected Utility Functions are unique up to Positive Affine Transformations [P 6.B.2]: If U(·) represents the preference ordering ≽, then V represents the same preference ordering if and only if V = α + βU, where β > 0.

von Neumann-Morgenstern Utility (23)
Proof:
• Note that if V(L) = α + βU(L), then V(L) fulfills the expected utility property.
• We have to show that if U and V represent the preferences, then V has to be an affine linear transformation of U.
• If U is constant on L_S, then V has to be constant as well; both functions can only differ by a constant α.

von Neumann-Morgenstern Utility (24)
Proof:
• Alternatively, for any L ∈ L_S and L̄ ≻ L̲, we define f_1 := (U(L) − U(L̲))/(U(L̄) − U(L̲)) and f_2 := (V(L) − V(L̲))/(V(L̄) − V(L̲)).
• f_1 and f_2 are affine linear transformations of U and V that satisfy the expected utility property.
• f_i(L̲) = 0 and f_i(L̄) = 1, for i = 1, 2.

von Neumann-Morgenstern Utility (25)
Proof:
• If L ∼ L̲ then f_1 = f_2 = 0; if L ∼ L̄ then f_1 = f_2 = 1.
• By expected utility, U(L) = γU(L̄) + (1 − γ)U(L̲) and V(L) = γV(L̄) + (1 − γ)V(L̲).
• If L̄ ≽ L ≽ L̲ then there has to exist a unique γ such that L ∼ γL̄ + (1 − γ)L̲. Therefore γ = (U(L) − U(L̲))/(U(L̄) − U(L̲)) = (V(L) − V(L̲))/(V(L̄) − V(L̲)).

von Neumann-Morgenstern Utility (26)
Proof:
• Then V(L) = α + βU(L), where α = V(L̲) − U(L̲) (V(L̄) − V(L̲))/(U(L̄) − U(L̲)) and β = (V(L̄) − V(L̲))/(U(L̄) − U(L̲)).

von Neumann-Morgenstern Utility (27)
• The idea of expected utility can be extended to a set of distributions F(x) for which the expectation of u(x) exists, i.e. ∫ u(x) dF(x) < ∞.
• For technical details see e.g. Robert (1994), The Bayesian Choice, and DeGroot, Optimal Statistical Decisions.
• Note that expected utility is a probability-weighted combination of Bernoulli utility values. I.e. the properties of the random variable z, described by the lottery l(z), are separated from the attitudes towards risk.
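Proposition [P 6.B.2] can be checked numerically: rescaling a VNM utility by V = a + bU with b > 0 leaves the ranking of any collection of lotteries unchanged. A small Python sketch with arbitrary outcomes, lotteries and Bernoulli utility (all numbers are invented):

import math

# Check that V = a + b*U with b > 0 ranks lotteries exactly as U does [P 6.B.2].
# Outcomes, lotteries and the Bernoulli utility are arbitrary illustrative choices.

Z = [10.0, 50.0, 100.0]
u = math.sqrt

def U(L):
    return sum(p * u(z) for p, z in zip(L, Z))

a, b = 5.0, 2.0                      # any affine transformation with b > 0
def V(L):
    return a + b * U(L)

lotteries = [[1.0, 0.0, 0.0], [0.2, 0.5, 0.3], [0.0, 0.4, 0.6]]
order_U = sorted(range(len(lotteries)), key=lambda i: U(lotteries[i]))
order_V = sorted(range(len(lotteries)), key=lambda i: V(lotteries[i]))
print(order_U == order_V)            # True: the same preference ordering is represented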
VNM Indifference Curves (1)
• Indifference curves are straight lines; see Ritzberger, Figure 2.4, page 41.
• Consider a VNM utility function and two indifferent lotteries L_1 and L_2. It has to hold that U(L_1) = U(L_2).
• By the expected utility theorem U(αL_1 + (1 − α)L_2) = αU(L_1) + (1 − α)U(L_2).
• If U(L_1) = U(L_2) then U(αL_1 + (1 − α)L_2) = U(L_1) = U(L_2) has to hold, so every convex combination of L_1 and L_2 is indifferent to them: the indifference set is a straight line.

VNM Indifference Curves (2)
• Indifference curves are parallel; see Ritzberger, Figures 2.5 and 2.6, page 42.
• Consider L_1 ∼ L_2 and a further lottery L_3 ≻ L_1 (w.l.o.g.).
• From βL_1 + (1 − β)L_3 and βL_2 + (1 − β)L_3 we obtain two compound lotteries.
• By construction these lotteries are on a line parallel to the line connecting L_1 and L_2.

VNM Indifference Curves (3)
• The independence axiom vNM4 implies that βL_1 + (1 − β)L_3 ∼ βL_2 + (1 − β)L_3 for β ∈ [0, 1].
• Therefore the line connecting the points βL_1 + (1 − β)L_3 and βL_2 + (1 − β)L_3 is an indifference curve.
• The new indifference curve is a parallel shift of the old one; by the linear structure of the expected utility function no other indifference curves are possible.

Allais Paradox (1)

Lottery   0        1-10      11-99
p_z       1/100    10/100    89/100
L_a       50       50        50
L_b       0        250       50
M_a       50       50        0
M_b       0        250       0

Allais Paradox (2)
• Most people prefer L_a to L_b and M_b to M_a.
• This is a contradiction to the independence axiom vNM4.
• Allais paradox in the Machina triangle: Gollier, Figure 1.2, page 8.

Allais Paradox (3)
• Expected utility theory avoids problems of time inconsistency.
• Agents violating the independence axiom are subject to Dutch book outcomes (they violate the no-money-pump assumption).

Allais Paradox (4)
• Three lotteries: L_a ≻ L_b and L_a ≻ L_c.
• But L_d = 0.5L_b + 0.5L_c ≻ L_a.
• The gambler is willing to pay some fee to replace L_a by L_d.

Allais Paradox (5)
• After nature moves, the gambler holds L_b or L_c instead of L_d.
• Now the agent is once again willing to pay a positive amount for receiving L_a.
• A gambler starting with L_a and holding L_a at the end has paid two fees!
• Dynamically inconsistent/time inconsistent.
• Discuss Figure 1.3, Gollier, page 12.

Risk Attitude (1)
• For the proof of the VNM utility function we did not place any assumptions on the Bernoulli utility function u(z).
• For applications often a Bernoulli utility function has to be specified.
• In the following we consider monetary outcomes z with u′(z) > 0; lotteries over money amounts are abbreviated l ∈ L_S.
• There are interesting interdependences between the Bernoulli utility function and an agent's attitude towards risk.

Risk Attitude (2)
• Consider a nondegenerate lottery l ∈ L_S and a degenerate lottery l̃. Assume that E(z) = z_{l̃} holds, i.e. the degenerate lottery pays the expectation of l for sure.
• Definition - Risk Aversion: A consumer is risk averse if l̃ is at least as good as l; in the stronger version l̃ is preferred to l.
• Definition - Risk Neutrality: A consumer is risk neutral if l̃ ∼ l.
• Definition - Risk Loving: A consumer is risk loving if l is at least as good as l̃.

Risk Attitude (3)
• By the definition of risk aversion we see that u(E(z)) ≥ E(u(z)).
• To attain such a relationship Jensen's inequality has to hold: if f(z) is a concave function and z ∼ F(z), then ∫ f(z) dF(z) ≤ f(∫ z dF(z)).
• For sums this implies ∑ p_z f(z) ≤ f(∑ p_z z). For strictly concave functions < holds; for convex functions we get ≥; for strictly convex functions >.
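A quick numerical illustration of this Jensen-based characterization (a made-up two-point lottery; log as an example of a concave Bernoulli utility, z² as a convex one):

import math

# Jensen's inequality for a simple money lottery: E(u(z)) <= u(E(z)) for concave u
# (risk aversion) and E(u(z)) >= u(E(z)) for convex u (risk loving). Numbers are made up.

probs, outcomes = [0.5, 0.5], [50.0, 150.0]
Ez = sum(p * z for p, z in zip(probs, outcomes))

def expected_utility(u):
    return sum(p * u(z) for p, z in zip(probs, outcomes))

u_concave = math.log
u_convex = lambda z: z ** 2

print(expected_utility(u_concave) <= u_concave(Ez))   # True: risk averse ranking
print(expected_utility(u_convex) >= u_convex(Ez))     # True: risk loving ranking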
Risk Attitude (4)
• For a lottery l with E(u(z)) < ∞ and E(z) < ∞ we can calculate the amount C at which a consumer is indifferent between receiving C for sure and the lottery l. I.e. l ∼ C and E(u(z)) = u(C) hold.
• In addition we are able to calculate the maximum amount π an agent is willing to pay for receiving the fixed amount E(z) for sure instead of the lottery l. I.e. l ∼ E(z) − π, or E(u(z)) = u(E(z) − π).

Risk Attitude (5)
• Definition - Certainty Equivalent [D 6.C.2]: The fixed amount C at which a consumer is indifferent between C and a gamble l is called the certainty equivalent.
• Definition - Risk Premium: The maximum amount π a consumer is willing to pay to exchange the gamble l for a sure event with outcome E(z) is called the risk premium.
• Note that C and π depend on the properties of the random variable (described by l) and the attitude towards risk (described by u).

Risk Attitude (6)
• Remark: the same analysis can also be performed for risk neutral and risk loving agents.
• Remark: MWG defines a probability premium, which is abbreviated by π in the textbook. Given a degenerate lottery and some ε > 0, the probability premium π^R is defined by u(z_{l̃}) = (1/2 + π^R) u(z + ε) + (1/2 − π^R) u(z − ε). I.e. mean-preserving spreads are considered here.

Risk Attitude (7)
• Proposition - Risk Aversion and Bernoulli Utility: Consider an expected utility maximizer with Bernoulli utility function u(·). The following statements are equivalent:
– The agent is risk averse.
– u(·) is a (strictly) concave function.
– C ≤ E(z) (< in the strict version).
– π ≥ 0 (> in the strict version).

Risk Attitude (8)
Proof: (sketch)
• By the definition of risk aversion: for a lottery l with E(z) = z_{l̃}, a risk averse agent satisfies l̃ ≽ l.
• I.e. E(u(z)) ≤ u(z_{l̃}) = u(E(z)) for a VNM utility maximizer.
• (ii) follows from Jensen's inequality.
• (iii) If u(·) is (strictly) concave, then E(u(z)) = u(C) ≤ u(E(z)); since u is increasing this can only be matched with C ≤ E(z).
• (iv) With a strictly concave u(·), E(u(z)) = u(E(z) − π) ≤ u(E(z)) can only be matched with π ≥ 0.

Arrow-Pratt Coefficients (1)
• Using simply the second derivative u″(z) causes problems with affine linear transformations.
• Definition - Arrow-Pratt Coefficient of Absolute Risk Aversion [D 6.C.3]: Given a twice differentiable Bernoulli utility function u(·), the coefficient of absolute risk aversion is defined by A(z) = −u″(z)/u′(z).
• Definition - Arrow-Pratt Coefficient of Relative Risk Aversion [D 6.C.5]: Given a twice differentiable Bernoulli utility function u(·), the coefficient of relative risk aversion is defined by R(z) = −z u″(z)/u′(z).

Comparative Analysis (1)
• Consider two agents with Bernoulli utility functions u_1 and u_2. We want to compare their attitudes towards risk.
• Definition - More Risk Averse: Agent 1 is more risk averse than agent 2 if agent 1 dislikes all lotteries that agent 2 dislikes.
• Define the function φ(x) = u_1(u_2^{−1}(x)). Since u_2(·) is an increasing function this expression is well defined. We assume, in addition, that the first and the second derivatives exist.
• By construction, with x = u_2(z) we get φ(x) = u_1(u_2^{−1}(x)) = u_1(u_2^{−1}(u_2(z))) = u_1(z). I.e. φ(x) transforms u_2 into u_1, such that u_1(z) = φ(u_2(z)).
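The certainty equivalent and the risk premium are easy to compute once a Bernoulli utility is fixed: C = u^{−1}(E(u(z))) and π = E(z) − C. The Python sketch below uses a made-up two-point lottery and compares log utility (A(z) = 1/z) with square-root utility (A(z) = 1/(2z)); since log is more risk averse in the Arrow-Pratt sense, it should deliver the lower C and the higher π:

import math

# Certainty equivalent C with u(C) = E(u(z)) and risk premium pi = E(z) - C for a
# two-point lottery under two Bernoulli utilities. All numbers are made up.

probs, outcomes = [0.5, 0.5], [50.0, 150.0]
Ez = sum(p * z for p, z in zip(probs, outcomes))

def certainty_equivalent(u, u_inverse):
    Eu = sum(p * u(z) for p, z in zip(probs, outcomes))
    return u_inverse(Eu)

C_log = certainty_equivalent(math.log, math.exp)             # A(z) = 1/z
C_sqrt = certainty_equivalent(math.sqrt, lambda x: x ** 2)   # A(z) = 1/(2z)

print(C_log, Ez - C_log)       # certainty equivalent and risk premium, log utility
print(C_sqrt, Ez - C_sqrt)     # certainty equivalent and risk premium, sqrt utility
print(C_log < C_sqrt <= Ez)    # True: more risk averse => lower C, higher premium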
Comparative Analysis (2)
• Proposition - More Risk Averse Agents [P 6.C.3]: Assume that the first and second derivatives of the Bernoulli utility functions u_1 and u_2 exist (u′ > 0 and u″ < 0). Then the following statements are equivalent:
– Agent 1 is (strictly) more risk averse than agent 2.
– u_1 is a (strictly) concave transformation of u_2.
– A_1(z) ≥ A_2(z) (> for strict) for all z.
– C_1 ≤ C_2 and π_1 ≥ π_2 (<, > for strict).

Comparative Analysis (3)
Proof:
• Consider a random variable z described by l and the function φ. Consumer 2 is risk averse.
• Step 1, (ii) ⇔ (i): By means of Jensen's inequality we get for a concave φ (with a strictly concave φ we get <): E(u_1(z)) = E(φ(u_2(z))) ≤ φ(E(u_2(z))) ≤ φ(u_2(E(z))) = u_1(E(z)). First ≤: φ has to be concave to apply Jensen.

Comparative Analysis (4)
Proof:
• Second ≤: u_2 has to be concave, since consumer 2 is risk averse (and φ is increasing).
• Therefore, if agent 1 is more risk averse, then u_1 has to be a (strictly) concave transformation of u_2.
• The above considerations work in both directions, therefore (i) and (ii) are equivalent.

Comparative Analysis (5)
Proof:
• Step 2, (iii) ⇔ (ii): By the definition of φ and our assumptions we get u_1′(z) = dφ(u_2(z))/dz = φ′(u_2(z)) u_2′(z) (since u_1′, u_2′ > 0 ⇒ φ′ > 0) and u_1″(z) = φ′(u_2(z)) u_2″(z) + φ″(u_2(z)) (u_2′(z))².

Comparative Analysis (6)
Proof:
• Dividing both sides by −u_1′(z) (< 0) and using u_1′(z) = φ′(u_2(z)) u_2′(z) yields: −u_1″(z)/u_1′(z) = A_1(z) = A_2(z) − (φ″(u_2(z))/φ′(u_2(z))) u_2′(z).
• Since A_1, A_2 > 0 due to risk aversion, φ′ > 0 and φ″ ≤ 0 (<) due to its concave shape, we get A_1(z) ≥ A_2(z) (>) for all z.

Comparative Analysis (7)
Proof:
• Step 3, (iv) ⇔ (ii): Jensen's inequality yields (with strictly concave φ): u_1(C_1) = E(u_1(z)) = E(φ(u_2(z))) < φ(E(u_2(z))) = φ(u_2(C_2)) = u_1(C_2).
• Since u_1′ > 0 we get C_1 < C_2.
• π_1 > π_2 works in the same way.
• The above considerations also work in both directions, therefore (ii) and (iv) are equivalent.

Comparative Analysis (8)
Proof:
• Step 4, (iv) ⇔ (ii): Jensen's inequality yields (with strictly concave φ): u_1(E(z) − π_1) = E(u_1(z)) = E(φ(u_2(z))) < φ(E(u_2(z))) = φ(u_2(E(z) − π_2)) = u_1(E(z) − π_2).
• Since u_1′ > 0 we get π_1 > π_2.

Stochastic Dominance (1)
• In an application, do we have to specify the Bernoulli utility function?
• Are there lotteries (distributions) such that F(z) is (strictly) preferred to G(z) for a whole class of Bernoulli utility functions?
• E.g. if X(ω) > Y(ω) a.s.?
• YES ⇒ concept of stochastic dominance.
• Mas-Colell, Figure 6.D.1, page 196.

Stochastic Dominance (2)
• Definition - First Order Stochastic Dominance [D 6.D.1]: A distribution F(z) first order dominates the distribution G(z) if for every nondecreasing function u : R → R we have ∫_{−∞}^{∞} u(z) dF(z) ≥ ∫_{−∞}^{∞} u(z) dG(z).
• Definition - Second Order Stochastic Dominance [D 6.D.2]: A distribution F(z) second order dominates the distribution G(z) if E_F(z) = E_G(z) and for every nondecreasing concave function u : R_+ → R the inequality ∫_0^∞ u(z) dF(z) ≥ ∫_0^∞ u(z) dG(z) holds.

Stochastic Dominance (3)
• Proposition - First Order Stochastic Dominance [P 6.D.1]: F(z) first order dominates the distribution G(z) if and only if F(z) ≤ G(z) for all z.
• Proposition - Second Order Stochastic Dominance [P 6.D.2]: F(z) second order dominates the distribution G(z) if and only if ∫_0^{z̄} F(z) dz ≤ ∫_0^{z̄} G(z) dz for all z̄ ∈ R_+.
• Remark: I.e. if we can show stochastic dominance we do not have to specify any Bernoulli utility function!
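For discrete distributions on a common support, proposition [P 6.D.1] is straightforward to check: compare the CDFs pointwise. A Python sketch with two made-up probability mass functions, followed by a spot check that several nondecreasing utilities indeed rank F above G:

from itertools import accumulate

# First order stochastic dominance for discrete distributions on a common support:
# F dominates G iff F(z) <= G(z) for all z [P 6.D.1]. All numbers are made up.

support = [0.0, 50.0, 100.0, 150.0]
f = [0.1, 0.2, 0.3, 0.4]            # pmf of F (shifts mass towards high outcomes)
g = [0.3, 0.3, 0.2, 0.2]            # pmf of G

F = list(accumulate(f))             # CDFs evaluated on the support
G = list(accumulate(g))
print(all(Fi <= Gi for Fi, Gi in zip(F, G)))     # True: F first order dominates G

# Consequence: any expected utility maximizer with nondecreasing u prefers F to G.
for u in (lambda z: z, lambda z: z ** 0.5, lambda z: min(z, 60.0)):
    EuF = sum(p * u(z) for p, z in zip(f, support))
    EuG = sum(p * u(z) for p, z in zip(g, support))
    print(EuF >= EuG)                            # True for each of these u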
Stochastic Dominance (4)
Proof:
• Assume that u is differentiable and u′ ≥ 0.
• Step 1, first order, "if" part: If F(z) ≤ G(z), integration by parts yields: ∫_{−∞}^{∞} u(z) dF(z) − ∫_{−∞}^{∞} u(z) dG(z) = u(z)(F(z) − G(z)) |_{−∞}^{∞} − ∫_{−∞}^{∞} u′(z)(F(z) − G(z)) dz = −∫_{−∞}^{∞} u′(z)(F(z) − G(z)) dz ≥ 0.
• The above inequality holds since the term inside the integral satisfies F(z) − G(z) ≤ 0 a.s. (and u′(z) ≥ 0).

Stochastic Dominance (5)
Proof:
• Step 2, first order, "only if" part: If FOSD holds then F(z) ≤ G(z) holds. Proof by means of contradiction.
• Assume there is a z̄ such that F(z̄) > G(z̄); z̄ > −∞ by construction. Set u(z) = 0 for z ≤ z̄ and u(z) = 1 for z > z̄. Here we get ∫_{−∞}^{∞} u(z) dF(z) − ∫_{−∞}^{∞} u(z) dG(z) = (1 − F(z̄)) − (1 − G(z̄)) = −F(z̄) + G(z̄) < 0.

Stochastic Dominance (6)
Proof:
• Second order SD: Assume that u is twice continuously differentiable with u″(z) ≤ 0; w.l.o.g. u(0) = 0.
• Remark: The equality of means implies: 0 = ∫_0^∞ z dF(z) − ∫_0^∞ z dG(z) = z(F(z) − G(z)) |_0^∞ − ∫_0^∞ (F(z) − G(z)) dz = −∫_0^∞ (F(z) − G(z)) dz.

Stochastic Dominance (7)
Proof:
• Step 3, second order, "if" part: Integration by parts (twice) yields: ∫_0^∞ u(z) dF(z) − ∫_0^∞ u(z) dG(z) = u(z)(F(z) − G(z)) |_0^∞ − ∫_0^∞ u′(z)(F(z) − G(z)) dz = −∫_0^∞ u′(z)(F(z) − G(z)) dz = −u′(z) ∫_0^z (F(x) − G(x)) dx |_0^∞ − ∫_0^∞ (−u″(z) ∫_0^z (F(x) − G(x)) dx) dz = ∫_0^∞ u″(z) (∫_0^z (F(x) − G(x)) dx) dz ≥ 0.
• Note that u″ ≤ 0 by assumption and ∫_0^z (F(x) − G(x)) dx ≤ 0 by the dominance condition, so the integrand is nonnegative; the boundary term vanishes by the equality of means.

Stochastic Dominance (8)
Proof:
• Step 4, second order, "only if" part: Suppose there is a z̄ with ∫_0^{z̄} (F(z) − G(z)) dz > 0 and consider u with u(z) = z̄ for all z > z̄ and u(z) = z for all z ≤ z̄. This yields: ∫_0^∞ u(z) dF(z) − ∫_0^∞ u(z) dG(z) = ∫_0^{z̄} z dF(z) − ∫_0^{z̄} z dG(z) + z̄((1 − F(z̄)) − (1 − G(z̄))) = z(F(z) − G(z)) |_0^{z̄} − ∫_0^{z̄} (F(z) − G(z)) dz − z̄(F(z̄) − G(z̄)) = −∫_0^{z̄} (F(z) − G(z)) dz < 0.

Stochastic Dominance (9)
• Definition - Monotone Likelihood Ratio Property: The distributions F(z) and G(z) fulfill the monotone likelihood ratio property if G(z)/F(z) is non-increasing in z.
• For z → ∞, G(z)/F(z) → 1 has to hold. This and the fact that G(z)/F(z) is non-increasing imply G(z)/F(z) ≥ 1 for all z.
• Proposition - First Order Stochastic Dominance follows from MLRP: MLRP results in F(z) ≤ G(z).
• Remark: If F(z) and G(z) have Lebesgue densities f(z) and g(z), then F(z) ≤ G(z) if the ratio of the densities g(z)/f(z) is non-increasing.

Arrow-Pratt Approximation (1)
• By means of the Arrow-Pratt approximation we can express the risk premium π in terms of the Arrow-Pratt measures of risk aversion.
• Assume that z = w + kx, where w is a fixed constant (e.g. wealth), x is a mean-zero random variable and k ≥ 0. By this assumption the variance of z is given by V(z) = k² V(x) = k² E(x²).
• Proposition - Arrow-Pratt Risk Premium with respect to Additive Risk: If risk is additive, i.e. z = w + kx, then the risk premium π is approximately equal to 0.5 A(w) V(z).

Arrow-Pratt Approximation (2)
Proof:
• By the definition of the risk premium we have E(u(z)) = E(u(w + kx)) = u(w − π(k)).
• For k = 0 we get π(k) = 0. For risk averse agents dπ(k)/dk ≥ 0.
• Use the definition of the risk premium and take the first derivative with respect to k on both sides: E(x u′(w + kx)) = −π′(k) u′(w − π(k)).
Arrow-Pratt Approximation (3)
Proof:
• For the left-hand side we get at k = 0: E(x u′(w + kx)) = u′(w) E(x) = 0, since E(x) = 0 by assumption.
• Matching LHS with RHS results in π′(k) = 0 at k = 0.

Arrow-Pratt Approximation (4)
Proof:
• Taking the second derivative with respect to k yields: E(x² u″(w + kx)) = (π′(k))² u″(w − π(k)) − π″(k) u′(w − π(k)).
• At k = 0 this results in (note that π′(0) = 0): π″(0) = −(u″(w)/u′(w)) E(x²).

Arrow-Pratt Approximation (5)
• A second order Taylor expansion of π(k) around k = 0 results in π(k) ≈ π(0) + π′(0) k + (π″(0)/2) k².
• Thus π(k) ≈ 0.5 A(w) E(x²) k².
• Since E(x) = 0 by assumption, the risk premium is proportional to the variance of x.

Arrow-Pratt Approximation (6)
• For multiplicative risk we can proceed as follows: z = w(1 + kx), where E(x) = 0.
• Proceeding in the same way results in: π(k)/w ≈ −0.5 (w u″(w)/u′(w)) k² E(x²) = 0.5 R(w) E(x²) k².
• Proposition - Arrow-Pratt Relative Risk Premium with respect to Multiplicative Risk: If risk is multiplicative, i.e. z = w(1 + kx), then the relative risk premium π/w is approximately equal to 0.5 R(w) k² V(x).
• Interpretation: risk premium per monetary unit of wealth.

Decreasing Absolute Risk Aversion (1)
• It is widely believed that the wealthier an agent, the smaller his/her willingness to pay to escape a given additive risk.
• Definition - Decreasing Absolute Risk Aversion: Given additive risk z = w + x, where x is a random variable with mean 0, the risk premium is a decreasing function of wealth w.

Decreasing Absolute Risk Aversion (2)
• Proposition - Decreasing Absolute Risk Aversion [P 6.C.3]: The following statements are equivalent:
– The risk premium is a decreasing function of wealth w.
– Absolute risk aversion A(w) is decreasing in wealth.
– −u′(z) is a concave transformation of u. I.e. u′ is sufficiently convex.

Decreasing Absolute Risk Aversion (3)
Proof: (sketch)
• Step 1, (i) ⇔ (iii): Consider additive risk and the definition of the risk premium. Treat π as a function of wealth: E(u(w + kx)) = u(w − π(w)).
• Taking the first derivative with respect to w yields: E(u′(w + kx)) = (1 − π′(w)) u′(w − π(w)).

Decreasing Absolute Risk Aversion (4)
Proof: (sketch)
• This yields: π′(w) = −[E(u′(w + kx)) − u′(w − π(w))]/u′(w − π(w)).
• π(w) decreases (π′(w) ≤ 0) if E(u′(w + kx)) − u′(w − π(w)) ≥ 0.
• Note that we have proven: if E(u_2(z)) = u_2(E(z) − π_2), then E(u_1(z)) ≤ u_1(E(z) − π_2) if agent 1 is more risk averse.

Decreasing Absolute Risk Aversion (5)
Proof: (sketch)
• Here we have the same mathematical structure (see the slides on comparative analysis): set z = w + kx, u_1 = −u′ and u_2 = u.
• ⇒ −u′ is more concave than u, such that −u′ is a concave transformation of u.

Decreasing Absolute Risk Aversion (6)
Proof: (sketch)
• Step 2, (iii) ⇔ (ii): Next define P(w) := −u‴(w)/u″(w), which is often called the degree of absolute prudence.
• From our former theorems we get: P(w) ≥ A(w) has to be fulfilled (see A_1 and A_2 above).
• Taking the first derivative of the Arrow-Pratt measure yields: A′(w) = −(1/(u′(w))²)(u‴(w) u′(w) − (u″(w))²) = −(u″(w)/u′(w))(u‴(w)/u″(w) − u″(w)/u′(w)) = (u″(w)/u′(w))(P(w) − A(w)).

Decreasing Absolute Risk Aversion (7)
Proof: (sketch)
• A decreases in wealth if A′(w) ≤ 0.
• We get A′(w) ≤ 0 if P(w) ≥ A(w), since u″(w)/u′(w) < 0.
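A quick numerical check of both results for a DARA utility: with log utility (A(w) = 1/w) and additive risk z = w + kx, the exact risk premium solving E(u(w + kx)) = u(w − π) falls with wealth and is close to the Arrow-Pratt approximation 0.5 A(w) k² E(x²). The two-point distribution of x and all parameter values below are made up:

import math

# DARA and the Arrow-Pratt approximation for log utility, u(z) = log(z), A(w) = 1/w.
# Additive risk z = w + k*x with a symmetric two-point x (E(x) = 0, E(x^2) = 1).
# Exact premium: pi(w) = w - exp(E(log(w + k*x))). All numbers are illustrative.

k = 5.0
x_vals, x_probs = [-1.0, 1.0], [0.5, 0.5]
Ex2 = sum(p * x * x for p, x in zip(x_probs, x_vals))

def exact_premium(w):
    Eu = sum(p * math.log(w + k * x) for p, x in zip(x_probs, x_vals))
    return w - math.exp(Eu)                  # invert u = log

for w in (20.0, 50.0, 100.0, 200.0):
    approx = 0.5 * (1.0 / w) * k * k * Ex2   # 0.5 * A(w) * k^2 * E(x^2)
    print(w, round(exact_premium(w), 4), round(approx, 4))
# The exact premium decreases in wealth (DARA) and is well approximated for small k/w.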
HARA Utility (1)
• Definition - Harmonic Absolute Risk Aversion: A Bernoulli utility function exhibits HARA if its absolute risk tolerance (= inverse of absolute risk aversion) T(z) := 1/A(z) is linear in wealth.
• I.e. T(z) = −u′(z)/u″(z) is linear in z.
• These functions have the form u(z) = ζ (η + z/γ)^{1−γ}.
• Given the domain of z, η + z/γ > 0 has to hold.

HARA Utility (2)
• Taking derivatives results in:
u′(z) = ζ ((1 − γ)/γ) (η + z/γ)^{−γ}
u″(z) = −ζ ((1 − γ)/γ) (η + z/γ)^{−γ−1}
u‴(z) = ζ ((1 − γ)(γ + 1)/γ²) (η + z/γ)^{−γ−2}

HARA Utility (3)
• Risk aversion: A(z) = (η + z/γ)^{−1}.
• Risk tolerance is linear in z: T(z) = η + z/γ.
• Absolute prudence: P(z) = ((γ + 1)/γ) (η + z/γ)^{−1}.
• Relative risk aversion: R(z) = z (η + z/γ)^{−1}.

HARA Utility (4)
• With η = 0, R(z) = γ: the constant relative risk aversion (CRRA) utility function, u(z) = log(z) for γ = 1 and u(z) = z^{1−γ}/(1 − γ) for γ ≠ 1.
• This function exhibits DARA; A(z) = γ/z, so A′(z) = −γ/z² < 0.

HARA Utility (5)
• With γ → ∞: the constant absolute risk aversion (CARA) utility function, A(z) = 1/η.
• Since u″(z) = −A u′(z) we get u(z) = −exp(−Az)/A.
• This function exhibits increasing relative risk aversion.

HARA Utility (6)
• With γ = −1: the quadratic utility function.
• This function requires z < η, since it is decreasing for z > η.
• Increasing absolute risk aversion.

State Dependent Utility (1)
• With von Neumann-Morgenstern utility theory only the consequences and their corresponding probabilities matter.
• I.e. the underlying cause of the consequence does not play any role.
• If the cause is one's state of health, this assumption is unlikely to be fulfilled.
• Example car insurance: Consider fair full cover insurance. Under VNM utility U(l) = p u(w − P) + (1 − p) u(w − P), etc. If, however, it plays a role whether we have wealth w − P because no accident occurred or wealth w − P because the insurance company compensated us after an accident, the agent's preferences depend on the states "accident" and "no accident".

State Dependent Utility (2)
• Definition - States: Events ω ∈ Ω causing the consequences z ∈ Z are called states of the world/states of nature. Ω is called the set of states (sample space).
• For these states we assume that they:
– leave no relevant aspect undescribed;
– are mutually exclusive: at most one state can obtain;
– are collectively exhaustive: ∪ω = Ω;
– do not depend on the choice of the decision maker.

State Dependent Utility (3)
• Definition - Uncertainty with State Dependent Utility: To formulate uncertainty consider the following parts:
– the set of consequences Z;
– the set of states Ω;
– a probability measure π on (Ω, F).

State Dependent Utility (4)
• Remark: Note that this construction corresponds to the idea of a random variable.
• A function g : Ω → Z will be called a random variable. With the sigma-field F generated by this random variable we get the probability measure π. An event is a subset of Ω. If Z ⊆ R^N it is a real-valued random variable.
• A random variable assigns to each state ω a consequence z ∈ Z; the preimage of z is g^{−1}(z) = {ω ∈ Ω | g(ω) = z}.
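A small numerical check of the defining HARA property (a sketch; ζ, η, γ are arbitrary, with ζ < 0 chosen so that u′ > 0 for γ > 1, and η + z/γ > 0 on the z-range used): risk tolerance computed from finite-difference derivatives of u should coincide with the linear formula T(z) = η + z/γ.

# HARA sketch: u(z) = zeta*(eta + z/gamma)**(1 - gamma). Check numerically that the
# absolute risk tolerance T(z) = -u'(z)/u''(z) equals eta + z/gamma (linear in z).
# Parameters are arbitrary; zeta < 0 makes u increasing for gamma > 1.

zeta, eta, gamma = -1.0, 2.0, 2.0

def u(z):
    return zeta * (eta + z / gamma) ** (1.0 - gamma)

def d1(f, z, h=1e-6):      # central first difference
    return (f(z + h) - f(z - h)) / (2.0 * h)

def d2(f, z, h=1e-4):      # central second difference
    return (f(z + h) - 2.0 * f(z) + f(z - h)) / (h * h)

for z in (1.0, 5.0, 10.0):
    T_numeric = -d1(u, z) / d2(u, z)
    T_formula = eta + z / gamma
    print(round(T_numeric, 3), round(T_formula, 3))   # approximately equal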
States (1)
• A random variable f mapping from the set of states into consequences gives rise to a lottery (π_1 ◦ z_1, ..., π_n ◦ z_n) for finite Ω.
• There is a loss of information when going from the random variable to the lottery/distribution representation: we do not know which state gave rise to a particular consequence.

States (2)
• A random variable f is called measurable if the preimage f^{−1}(z) ∈ F for every z ∈ Z, i.e. the preimages have to be contained in the sigma-field.
• With finitely many states we can define the set P = {f^{−1}(z̄)}_{z̄ ∈ Z} with f^{−1}(z̄) := {ω ∈ Ω | f(ω) = z̄}. By construction P is a partition.
• f^{−1}(z̄_1) ∩ f^{−1}(z̄_2) = ∅ for z̄_1 ≠ z̄_2, ∪_i f^{−1}(z_i) = Ω, and f^{−1}(z_i) ≠ ∅ by construction.
• Within f^{−1}(z̄_1) the function f(ω) is constant: f(ω) = z̄_1 for ω ∈ f^{−1}(z̄_1).

States (3)
• Example - Asset Price: Assume the price of an asset is permitted to move upwards (by the factor 1 + u_t) or downwards (by 1 − d_t) with probabilities p and 1 − p. The initial price is S_0 = 1. We consider two periods. To keep the analysis simple assume that (1 + u_1)(1 − d_2) ≠ (1 − d_1)(1 + u_2).
• Then ω_1 corresponds to the consequence (1 + u_1)(1 + u_2), ω_2 to (1 + u_1)(1 − d_2), ω_3 to (1 − d_1)(1 + u_2) and ω_4 to (1 − d_1)(1 − d_2). The sigma-field generated by this random variable consists of all subsets of Ω.

States (4)
• At t = 2 the partition P_2 is given by the singletons {ω_1}, ..., {ω_4}. For each consequence the preimage f^{−1}(z_i) ∈ F_2, indeed f^{−1}(z_i) ∈ P_2.
• At t = 1 only the subsets {ω_1, ω_2} and {ω_3, ω_4} are measurable with respect to F_1. For t = 0 only the constant S_0 is measurable with respect to the trivial sigma-field F_0 = {∅, Ω}.
• P_1 = {{ω_1, ω_2}, {ω_3, ω_4}}.

States (5)
• I.e. we get the filtration F_0 ⊆ F_1 ⊆ F_2.
• The corresponding partitions are P_0, P_1 and P_2: P_2 is finer than P_1 and P_1 is finer than P_0.

States (6)
• The subsets of P_2 are f_2^{−1}(z_i) = {ω_i}, i = 1, ..., 4. For P_1 we get the subsets f_1^{−1}(z̄_i) = {ω_1, ω_2} for i = 1, 2 and f_1^{−1}(z̄_i) = {ω_3, ω_4} for i = 3, 4. For P_0 we get Ω.
• Note that f_2^{−1}(z̄_i) ⊆ f_1^{−1}(z̄_i) but not vice versa.

States (7)
• Example - Signals: Assume that a random variable f maps from Ω to a set of reports/signals R; the elements of R are denoted r.
• H_f is the partition generated by f^{−1}(r), i.e. H_f = {f^{−1}(r̄)}_{r̄ ∈ R}.
• For two random variables f and g, the events f^{−1}(r̄_1) ∩ g^{−1}(r̄_2) = {ω ∈ Ω | f(ω) = r̄_1 and g(ω) = r̄_2} also partition the state space.
• If for every r̄_1 it happens that f^{−1}(r̄_1) ⊆ g^{−1}(r̄_2) for some r̄_2, then the addition of g does not result in further information.

States (8)
• Definition - Information Partition: A partition H of the state space Ω is called an information partition; the subsets of this partition are denoted h. For every state ω ∈ Ω, the event/function h(ω) assigning an element of H to each ω ∈ Ω is called the information set containing ω (possibility set).
• Note that if H = {h_1, ..., h_m}, then by h(ω) we are looking for the h_i in which ω is contained. I.e. h(·) : Ω → H, ω ↦ h_i.
• This assignment satisfies: ω ∈ h(ω) for all ω ∈ Ω, and if ω′ ≠ ω and ω′ ∈ h(ω), then h(ω′) = h(ω).

States (9)
• Definition - Knowledge: An event E ⊆ Ω is known at the state ω ∈ Ω if h(ω) ⊆ E.
• I.e. E is known if anything considered possible implies it. What is known to the decision maker depends on the state ω.
• See Ritzberger, page 63, Example 2.10.
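A minimal Python sketch of the possibility correspondence h(ω) and the knowledge test h(ω) ⊆ E, using a four-state example in the spirit of the asset-price tree (the state labels and the partition are hypothetical):

# Information partition and knowledge: an event E is known at state w iff h(w) ⊆ E.
# Four hypothetical states and the time-1 partition {{w1, w2}, {w3, w4}}.

partition_t1 = [{"w1", "w2"}, {"w3", "w4"}]

def h(state, partition):
    """Information set (cell of the partition) containing the state."""
    return next(cell for cell in partition if state in cell)

def knows(event, state, partition):
    """True if the event is known at this state, i.e. h(state) is a subset of the event."""
    return h(state, partition) <= event

E = {"w1", "w2", "w3"}                  # some event
print(knows(E, "w1", partition_t1))     # True:  h(w1) = {w1, w2} ⊆ E
print(knows(E, "w3", partition_t1))     # False: h(w3) = {w3, w4} is not contained in E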
States (10)
• When a decision maker observes realizations of a random variable she will update her probability assignments on z.
• Call π the prior beliefs and π̃ the posterior beliefs.
• A decision maker regards states outside h(ω) as impossible, i.e. π̃(h(ω)) = 1.
• Only states ω′ ∈ h(ω) are assigned a positive probability.
• The posterior probability of a set E given h(ω) is then given by the Bayes theorem: for π(h(ω)) > 0, π(E | h(ω)) = π(h(ω) ∩ E)/π(h(ω)).

States (11)
• Note that π(E | h(ω)) depends on ω and is therefore a random variable.
• For a finite probability space with z ∈ Z we get: π(f^{−1}(z) | h(ω)) = π(h(ω) | f^{−1}(z)) π(f^{−1}(z)) / ∑_{z′ ∈ Z} π(h(ω) | f^{−1}(z′)) π(f^{−1}(z′)).
• Note that π(f^{−1}(z) | h(ω)) = π(z | h(ω)) by construction; the denominator above is different from zero.
• For an infinite probability space see textbooks on probability theory.

State Dependent Utility (1)
• With VNM utility theory we have considered the set of simple lotteries L_S over the set of consequences Z. Each lottery l_i corresponds to a probability distribution on Z.
• Assume that Ω has finitely many states. Define a random variable f mapping from Ω into L_S. Then f(ω) = l_ω for all ω ∈ Ω. I.e. f assigns a simple lottery to each state ω.
• If the probabilities of the states are given by π(ω), we arrive at the compound lottery l_SDU = ∑_ω π(ω) l_ω.
• I.e. we have calculated the probabilities of compound lotteries.

State Dependent Utility (2)
• The set of the l_SDU will be called L_SDU. Such lotteries are also called horse lotteries.
• Note that convex combinations of elements of L_SDU are also in L_SDU.
• Definition - Extended Independence Axiom: The preference relation ≽ satisfies extended independence if for all l^1_SDU, l^2_SDU, l_SDU ∈ L_SDU and α ∈ (0, 1) we have l^1_SDU ≽ l_SDU if and only if α l^1_SDU + (1 − α) l^2_SDU ≽ α l_SDU + (1 − α) l^2_SDU.

State Dependent Utility (3)
• Proposition - Extended Expected Utility/State Dependent Utility: Suppose that Ω is finite and the preference relation ≽ satisfies continuity and extended independence on L_SDU. Then there exists a real-valued function u : Z × Ω → R such that l^1_SDU ≽ l^2_SDU if and only if ∑_{ω ∈ Ω} π(ω) ∑_{z ∈ supp(l^1_SDU(ω))} p^{l_1}(z | ω) u(z, ω) ≥ ∑_{ω ∈ Ω} π(ω) ∑_{z ∈ supp(l^2_SDU(ω))} p^{l_2}(z | ω) u(z, ω).

State Dependent Utility (4)
• u is unique up to positive linear (affine) transformations.
• Proof: see Ritzberger, page 73.
• If only consequences matter, such that u(z, ω) = u(z), then state dependent utility is equal to VNM utility.

Subjective Utility (1)
• In the above settings we have assumed that the π(ω) are objective probabilities.
• In some applications the likelihood of an event is more or less a subjective estimate.
• With subjective probability theory the π(ω) are subjective beliefs.
• Here the probability of an event depends on the agent's preferences.

Subjective Utility (2)
• Consider an extended expected utility formulation where u(z, ω) and π(ω) depend on preferences.
• Here we need some way to disentangle the Bernoulli utility function from the probabilities. This requires a further axiom.
• Definition - State Preferences: Consider the set of simple lotteries L_S (with ω fixed): L_1 ≽_ω L_2 if and only if ∑_z p^{l_1}(z) u(z, ω) ≥ ∑_z p^{l_2}(z) u(z, ω).
• Axiom - State Uniform Preferences: ≽_ω = ≽_{ω′} for all ω and ω′ in Ω.
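The updating rule π(E | h(ω)) = π(h(ω) ∩ E)/π(h(ω)) is a one-liner once the prior and the partition are specified. A Python sketch reusing the four-state example above; the prior is made up:

# Bayesian updating on an information set: posterior(E | h(w)) = prior(h(w) ∩ E) / prior(h(w)),
# defined whenever prior(h(w)) > 0. Prior probabilities and states are hypothetical.

prior = {"w1": 0.4, "w2": 0.1, "w3": 0.3, "w4": 0.2}
partition_t1 = [{"w1", "w2"}, {"w3", "w4"}]

def prob(event):
    return sum(prior[w] for w in event)

def h(state):
    return next(cell for cell in partition_t1 if state in cell)

def posterior(event, state):
    cell = h(state)
    return prob(cell & set(event)) / prob(cell)

E = {"w1", "w3"}
print(posterior(E, "w1"))   # 0.4 / 0.5 = 0.8
print(posterior(E, "w3"))   # 0.3 / 0.5 = 0.6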
Subjective Utility (3)
• Claim: With state uniform preferences we get u(z, ω) = π(ω) u(z) + β(ω).
• By state uniformity, L_1 ≽_ω L_2 has to be fulfilled for all ω. Therefore ∑_z p^{l_1}(z) u(z, ω) ≥ ∑_z p^{l_2}(z) u(z, ω) has to hold for each ω.
• This can only be the case if ∑_z p^{l_1}(z) u(z, ω) is a positive affine transformation of ∑_z p^{l_1}(z) u(z, ω′) for arbitrary pairs ω, ω′ (transformation properties of VNM utility functions).
• For notational convenience and w.l.o.g. let us consider degenerate lotteries; here u(z, ω) is a positive affine transformation (PAT) of u(z, ω′).

Subjective Utility (4)
• Thus, a(ω) u(z, ω) + b(ω) = a(ω′) u(z, ω′) + b(ω′).
• W.l.o.g. use ω_1 as benchmark. Then a(ω) u(z, ω) + b(ω) = u(z, ω_1) =: u(z).
• ⇒ u(z, ω) = (u(z) − b(ω))/a(ω) for all ω, with a(ω_1) = 1 and b(ω_1) = 0.
• Thus u(z, ω) = π(ω) u(z) + β(ω) with π(ω) = 1/a(ω) and β(ω) = −b(ω)/a(ω).

Subjective Utility (6)
• u(z, ω) ≥ u(z′, ω) for all ω holds if ∑_ω u(z, ω) ≥ ∑_ω u(z′, ω) holds, and vice versa, with u(z, ω) a PAT of u(z, ω′).
• Plugging in (π(ω) u(z) + β(ω)) results in ∑_ω u(z, ω) = ∑_ω (π(ω) u(z) + β(ω)).
• The same preferences are represented if we divide all a and b by the same constant.
• Choose this constant such that ∑_ω w(ω) = 1; then u(z, ω) = w(ω) v(z, ω).

Subjective Utility (7)
• These weights have to correspond to the subjective probabilities to result in an extended expected utility function.
• Proposition - Subjective Expected Utility: Suppose that the preference relation ≽ satisfies continuity and extended independence on L_SDU. Assume that these preferences are state uniform. Then there exist subjective probabilities and an extended expected utility function representing these preferences.
• Limitations: see e.g. the Ellsberg paradox.

Knight Uncertainty (1)
• Knight distinguished between risk and uncertainty.
• For risk the probabilities are objectively given, for uncertainty they are not.
• With subjective probability theory, uncertainty can once again be expressed by probabilities.
• For non-vNM approaches see e.g. Gilboa.