
[Serial] Research Reading Diary

  I only now realize the importance of keeping a diary. For term papers it was enough to get the course direction right; now every aspect must be covered, and I can no longer manage without systematically recording the papers I read. Starting today I will keep this diary and post it here to share with everyone. Please do not spam the thread; if you have suggestions or questions, reply below. Thanks.

  By 4/10 the theoretical model was basically complete, but my advisor asked me to revise it. Revise it is, then.

  Apr. 10

  C. I found a very interesting paper, “Dynamic consumption and portfolio choice with stochastic volatility in incomplete markets”. This paper solves problems similar to mine. Yet without the estimation tools I have, the authors were forced to solve the dynamic programming problem analytically. To do so, they made a series of relatively restrictive assumptions. (The core contribution is at the top of page 11.) They also estimated the model.

  I. “Why stocks may disappoint”, an interesting paper. There is a surprisingly large variation in equity holdings. Standard portfolio choice models often predict large equity positions for most investors and fail to generate the observed cross-sectional variation in portfolio choice. The authors use loss aversion theory to explain the paradox. With the flexible utility function in my model, I might explain the paradox with traditional expected utility theory. A loyal defender of the old school.

  M. I read J. Rust’s “How Social Security and Medicare affect retirement behavior in a world of incomplete markets”. This econometric model is quite different from what I expected. He assumes an individual’s choice is also affected by unobserved state variables, whose effects follow an extreme value distribution. Using a theoretical result from his previous paper, he derives the conditional distribution of the choice variable; then, as one would expect, he estimates by MLE. I think assuming unobserved state variables is a major contribution.
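  A side note for myself (this is the standard result Rust’s approach relies on, not code from his paper): i.i.d. Type I extreme value shocks are convenient precisely because they yield closed-form, multinomial-logit conditional choice probabilities. A minimal sketch:

```python
import math

def choice_probabilities(values):
    """Conditional choice probabilities when each alternative's
    unobserved shock is i.i.d. Type I extreme value: the familiar
    multinomial-logit formula P(j) = exp(v_j) / sum_k exp(v_k)."""
    m = max(values)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

# Three alternatives with deterministic values 1.0, 2.0, 0.5
probs = choice_probabilities([1.0, 2.0, 0.5])
```

  Because the shocks integrate out analytically, the likelihood of observed choices can be written down directly and maximized, which is exactly what makes MLE tractable here.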

  M. “Why Youths Drop Out of High School: The Impact of Preferences, Opportunities, and Abilities”, Wolpin 1999. He uses Monte Carlo simulation to approximate the numerical integration; I will probably do the same. He uses Heckman and Singer (1984) to deal with heterogeneity, an improvement compared to J. Rust (1997).

  I think there is a major flaw in my estimation method. I need to read the rest of the existing papers to fix it.

  Apr 11.

  M. Wolpin again: “The Career Decisions of Young Men”. He introduces the idea of an unobserved exogenous shock. I assume all prices are deterministic in my model, and I also omitted unobservable financial shocks (car accidents etc.); I probably need this idea.

  M. Still Wolpin: “Accounting for Wage and Employment Changes in the U.S. from 1968-2000: A Dynamic Model of Labor Market Equilibrium”. The authors use a very interesting model of the labor market from 1968 to 2000: a partial equilibrium model with individuals as suppliers and the whole economy as the demander. The economy fits into the dynamic model too; it accumulates capital as a state variable.

  Continuing to revise the proposal; I just dislike writing prose. Below is the literature review.

  Risk Aversion:

  Expected utility is the most widely used theory for modeling people’s valuation of stochastic variables. Under expected utility theory, one’s risk aversion is determined solely by the utility function. Absolute risk aversion and relative risk aversion are the parameters commonly used to measure one’s attitude toward risk. Numerous papers have estimated the risk aversion of different groups of people toward different objects. Because of the complexity of evaluating risk aversion over multiple objects, most studies examine risk aversion toward monetary value. When the prices of the uninteresting goods are assumed fixed, expenditure becomes the only stochastic variable in the indirect utility function, which preserves all information about one’s preferences.
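  A concrete illustration of the two standard measures (my own sketch, using CRRA utility as the example, not a claim about any paper above): for u(c) = c^(1-γ)/(1-γ), absolute risk aversion A(c) = -u''(c)/u'(c) = γ/c falls with consumption, while relative risk aversion c·A(c) = γ is constant.

```python
def crra_ara(c, gamma):
    """Arrow-Pratt absolute risk aversion -u''(c)/u'(c) for CRRA
    utility u(c) = c**(1 - gamma) / (1 - gamma): equals gamma / c,
    so richer individuals tolerate larger absolute gambles."""
    return gamma / c

def crra_rra(c, gamma):
    """Relative risk aversion c * A(c): constant and equal to gamma,
    which is why gamma itself is the coefficient papers report."""
    return c * crra_ara(c, gamma)
```

  This is the sense in which estimating the utility function’s curvature parameter pins down risk aversion in the studies cited below.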

  In Donkers, Melenberg, and Van Soest (2001), based on lottery questions in a large household survey, the authors first semiparametrically estimate an index of risk aversion. They make only weak assumptions about the underlying decision process, and their estimation method allows for generalizations of expected utility. In Lence (2000), a generalized expected utility model is fitted to U.S. farm data to estimate farm operators’ time preferences and risk attitudes; he claims the estimated utility parameters are quite “reasonable” and exhibit high accuracy. Rosenberg and Engle (2002) investigate the empirical characteristics of investor risk aversion over equity return states by estimating a time-varying pricing kernel, which they call the empirical pricing kernel (EPK); they find that the EPK exhibits countercyclical risk aversion over S&P 500 return states. Halek and Eisenhauer (2001) use life insurance data to estimate the Pratt-Arrow coefficient of relative risk aversion for each of nearly 2,400 households; attitudinal differences toward pure risk are then examined across demographic subgroups.

  Primary Goal:

  Because of the importance of risk aversion, it deserves a sophisticated model. The primary goal of this research is to estimate a measure of risk aversion with the features listed below.

  The risk aversion measured here can be traced back to individual utility theory with explicit assumptions. Thus we can link hypothetical utility with real choices made by individuals. In future research some assumptions, such as perfect information, can be relaxed, and omitted variables such as marriage can be added to the model. With additional information on an individual’s (or a cohort’s) wealth and income, we can give advice on different investment projects.

  In this research we explicitly model how people substitute utility between different time horizons through investment and education. Structurally separating intertemporal substitution from risk aversion allows us to evaluate the welfare effects of events with different time horizons.

  An individual’s risk aversion can be partly characterized by her demographic and family status variables. We can give future investment advice as well as welfare evaluations to specific individuals.

  Apr 13.

  Structural Dynamic Model

  Structural dynamic models were first used by labor economists (Keane and Wolpin 1994, Keane and Wolpin 1997, Rust and Phelan 1997). Because it is relatively easy to add stochastic events and study shock effects in a dynamic model, there is an increasing trend of adopting this approach in the labor economics literature. In these papers researchers model a consumer making certain labor choices to achieve a local maximum. Inspired by these papers, I intend to model finance-related decisions to achieve another local maximum. In the basic model, we can estimate an individual’s indirect utility function, and therefore her risk aversion coefficients. By adding more shocks and choice variables, I can estimate the relationships between the choices and states of interest.

  IV  Data

  Although data should come after the empirical model, I put it before. Data availability is the main constraint on this empirical study. The main data set in this research is the National Longitudinal Survey Original Cohorts: Older and Young Men (NSLOY66). This survey was conducted by the Bureau of Labor Statistics (BLS). In 1966 they started interviewing 5,020 older men aged 45 to 59 and 5,225 young men aged 14 to 24. The interviews of older men were discontinued in 1981, and the interviews of younger men in 1990. Therefore the survey actually contains two data sets: the National Longitudinal Survey Original Cohorts: Older Men (NSLO66) and the National Longitudinal Survey Original Cohorts: Young Men (NSLY66).

  The split of the data into two cohorts adds an unexpected bonus to the research: it solves the long-horizon problem and gives one solution to the ending-point problem. I will discuss both problems in detail in the fifth part. To take advantage of this bonus, two separate empirical models must be estimated. NSLO66 must be estimated first, as its results will supply the ending-point information for NSLY66.

  As the US government practiced the draft during the interview period of NSLY66, I have a theoretical problem. I can either drop all drafted observations or model the draft as an exogenous shock. Drafted observations can be dropped: since the draft lottery is a purely random process, the remaining sample is still random. But I may need to model the draft, because by going to school an individual could get a temporary exemption from it; thus the draft likely increases the value of education.

  This survey includes 14 categories of information. They are

  1. Labor market experiences
     2. Work-related discrimination
     3. Training investments
     4. Schooling information (school records, aptitude, IQ)
     5. Military experiences
     6. Retirement plans and experiences
     7. Volunteer work and leisure-time activities
     8. Income and assets
     9. Physical well-being, healthcare, and health insurance
    10. Alcohol and cigarette use
    11. Attitudes, aspirations, and psychological well-being
    12. Geographic and environmental data
    13. Demographics, family background, and household composition
    14. Marital history, children, and dependents

  Categories 1, 4, “5”, 6, 8, 12, and 13 contain all the data my theoretical model needs. Because I have not processed the data yet, I cannot give summary statistics.

  V  Issues in empirical model

  Conversion problems:

  When converting the theoretical model into an empirical model, I meet two problems. One is that the time frame of the theoretical model is too long; it is unlikely that we observe the entire life span of a human being. Two important points of structural change, the transition from school to work and retirement, lie near each end. This problem is partially solved by the data.

  The other is that to make an optimal choice in any period I need an ending point for the simulation. Naturally the ending point is when the individual’s cumulative probability of death is close enough to one. This can easily push the ending age past 90. Because of the curse of dimensionality I will discuss later, that is unacceptable.

  Numerical integration:

  Calculating the expected utility of a stochastic variable with a continuous pdf requires integration. Because of the complexity of the objective function, a closed form is probably not available. Numerical integration is one of the most time-consuming processes; time-consuming here means a matter of seconds per evaluation. The objective function Li contains multiple stochastic variables, so multiple integrations are required to obtain the utility value for a single period. Keane and Wolpin (1997) introduced a method using Monte Carlo simulation to approximate the multidimensional integration.
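  A minimal sketch of the Monte Carlo idea (my own toy example, not Keane and Wolpin’s implementation): replace the integral E[u(x)] by a sample average over random draws, which extends to multiple stochastic variables without nested quadrature.

```python
import math
import random

def mc_expected_utility(utility, sample_shock, n_draws=10_000, seed=0):
    """Approximate E[u(x)] by averaging the utility over random draws
    of the shock instead of integrating against the pdf analytically.
    The same loop works unchanged when the shock is multidimensional."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        total += utility(sample_shock(rng))
    return total / n_draws

# Toy example: E[log c] with lognormal consumption c = exp(N(0, 0.1)),
# whose true value is exactly 0.
est = mc_expected_utility(
    lambda c: math.log(c),
    lambda rng: math.exp(rng.gauss(0.0, 0.1)),
)
```

  The approximation error shrinks like 1/sqrt(n_draws) regardless of how many stochastic variables the shock contains, which is exactly why the simulation approach beats deterministic quadrature in high dimensions.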

  Curse of dimensionality:

  Setting up a dynamic programming problem is not a hard task, yet solving one might be. The computational time needed grows exponentially with the number of periods. This is the well-known curse of dimensionality.
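  The arithmetic behind the curse, as a toy calculation of my own:

```python
def path_count(choices_per_period, periods):
    """Number of distinct choice paths a brute-force solver would have
    to enumerate: exponential in the number of periods."""
    return choices_per_period ** periods

# With only 10 discretized choices per period, 10 periods already give
# 10 billion paths; a 40-period working life is hopeless without
# approximation.
paths_10 = path_count(10, 10)
```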

  To relieve the curse I can start by reducing the number of possible paths. Paths are determined by the number of choices available in each period. A continuous choice is usually discretized. Traditionally the break points are distributed uniformly over the possible range. Unless the pdf of people’s choices is uniform, this method is clearly not efficient. If I can improve the cutting efficiency, it will reduce the number of break points needed for a given error level. The idea is to distribute break points based on some estimate of the CDF of people’s choices. Fortunately I observe people’s actual choices, so I can estimate the CDF of a subsample nonparametrically and set the break points based on it. For instance, suppose I estimate the CDF of the proportion of income one puts into investment at age 23 and am willing to divide the choice into 10 points; then the break points for people at 23 years old would be the deciles of that estimated distribution.
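  A sketch of this quantile-based discretization (the function and the data are mine, purely illustrative):

```python
def quantile_breaks(observed_choices, n_points):
    """Place discretization points at evenly spaced quantiles of the
    empirical CDF of observed choices, so regions where choices
    cluster get more break points than a uniform grid would give."""
    s = sorted(observed_choices)
    n = len(s)
    # take the midpoint of each of n_points equal-probability bins
    return [s[int((i + 0.5) / n_points * n)] for i in range(n_points)]

# Hypothetical observed investment shares for 23-year-olds
shares = [0.05, 0.06, 0.07, 0.08, 0.10, 0.12, 0.15, 0.20, 0.30, 0.50]
breaks = quantile_breaks(shares, 5)
```

  Compared with a uniform grid over [0, 0.5], most of these break points land below 0.2, where most of the observed choices actually are.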

  There is not much I can do about discrete choices. I will restrict certain discrete choices to certain time frames: for instance, going to school happens before 30 and retirement after 50.

  Reducing the number of periods in which dynamic choice is available is another method. In the theoretical model we need to leave all options open until the individual is dead with probability one, yet too many periods consume too much computation time. Moreover, beyond the observed periods the information gain is limited. If we put the remaining state variables into a function used as the ending function, and estimate it, it will represent the maximized utility thereafter. This functional form should be as flexible as possible, preferably nonparametric. After all, I am using a static function to approximate a dynamic process.

  VI  Additional Empirical Applications

  Optimal Portfolio Strategy:

  In the theoretical model, adjusting the portfolio is an important way to maximize utility. With the parameters estimated, we can predict the utility function of an individual of interest. Fixing all other choice variables and maximizing utility only by adjusting the portfolio, we can obtain a different optimal portfolio at each stage of her life. In estimation I allow only two categories of instruments, as I have only that much data and the model suffers the curse of dimensionality. When predicting the optimal portfolio, data is not a problem and I only need to run the optimization once.
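  A toy version of this exercise (all parameter values are illustrative assumptions, not estimates from the model): grid-search the share of wealth in a risky instrument that maximizes expected CRRA utility of end-of-period wealth, holding everything else fixed.

```python
import math
import random

def optimal_risky_share(gamma, n_grid=101, n_draws=5000, seed=1):
    """Two-instrument sketch: choose the share w of wealth in a risky
    asset (vs. a safe one returning 2%) to maximize expected CRRA
    utility, approximated by Monte Carlo over lognormal returns."""
    rng = random.Random(seed)
    safe_return = 1.02
    draws = [math.exp(rng.gauss(0.06, 0.2)) for _ in range(n_draws)]

    def expected_utility(w):
        total = 0.0
        for r in draws:
            wealth = (1 - w) * safe_return + w * r
            total += wealth ** (1 - gamma) / (1 - gamma)
        return total / n_draws

    # brute-force grid search over w in [0, 1]
    return max((i / (n_grid - 1) for i in range(n_grid)),
               key=expected_utility)

# More risk-averse individuals should hold less of the risky asset.
w_moderate = optimal_risky_share(gamma=2.0)
w_cautious = optimal_risky_share(gamma=10.0)
```

  The grid search stands in for the single optimization run mentioned above; with estimated parameters, gamma and the return process would come from the model rather than being assumed.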

  Insurance Valuation on Consumer Side:

  The insurance we discuss here is designed to compensate potential financial losses, such as health insurance or auto insurance. The value of insurance to the consumer can be estimated only if people’s utility functions are known. As the utility function is estimated in this model, we can estimate the value of a given insurance policy. I model the stochastic event the policy insures against as an exogenous shock and calculate the maximized utility. After that I put the policy and an initial payment into the model, let it compensate the individual when the event happens, solve the maximization problem again, and adjust the initial payment until utility matches the previous problem.
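  The matching procedure can be sketched in a one-period toy case (illustrative numbers, CRRA utility assumed): find the premium at which the fully insured individual’s utility equals her expected utility without the policy.

```python
def insurance_value(wealth, loss, p_loss, gamma=2.0, tol=1e-8):
    """Compensating-premium sketch of the valuation above: bisect on
    the premium until CRRA utility with full insurance equals expected
    utility without it. One period only; all numbers illustrative."""
    def u(c):
        return c ** (1 - gamma) / (1 - gamma)

    eu_uninsured = p_loss * u(wealth - loss) + (1 - p_loss) * u(wealth)
    lo, hi = 0.0, loss  # the policy is worth at least 0, at most the loss
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if u(wealth - mid) > eu_uninsured:  # still better off insured
            lo = mid
        else:
            hi = mid
    return lo

# A 10% chance of losing half of wealth 100: the expected loss is 5,
# but a risk-averse consumer values full coverage above that.
premium = insurance_value(wealth=100.0, loss=50.0, p_loss=0.1)
```

  In the full dynamic model the same logic applies, except both sides of the comparison are maximized value functions rather than one-period utilities.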

  Bequest and Life Insurance Valuation:

  When estimating the model, an ending function is always required. The ending function always contains a terminal utility function, which represents the individual’s valuation of the bequest she leaves to others if she dies at a given age and family composition. Life insurance compensates for death or serious injury. Thus it can be valued by a process similar to the one above.

  Discount Rate (Solely a Personal View):

  The discount rate is widely used in welfare economics when evaluating a project over time. If we assume claims on public welfare should not be based on wealth, the interest rate is clearly a bad approximation, since it depends solely on the demand and supply of funds. In many studies the discount rate is a population average, which is problematic when dealing with decisions over a long time horizon.

  I think people’s discount rate rises as their expected remaining lifetime shrinks. This is a testable hypothesis in my model. I believe people should only be responsible for their own decisions. For instance, when deciding on a public project with a 30-year life span, many old people, each with a vote on the discount rate, may not live through it, while many newly matured people with no vote will be forced to endure the later part of those 30 years. Under my definition of fairness, we have a fairness problem. I would prefer an average weighted by expected remaining life.

  Retirement planning:

  It is just a prediction of AI after the retirement decision is made. To be accurate, though, the prediction needs the terminal utility function and an estimated discount rate.

  Value of life

  This is one of the questions economists are most reluctant to answer. Before this model, economists did not know how to estimate a model maximizing utility over time; that is the small problem. A larger one is that we do not know how to value non-marketable goods, such as family, children, etc., which seem to be worth much more than non-subsistence marketable goods. Yet it seems many researchers need this number, right or wrong. Hirth, Chernew, Miller, Fendrick, and Weissert (2000) simply simulated the number in several ways and got 150 citations.

  Learning by Living

  In most research, researchers are forced to make assumptions they do not like, and among those there is always one they hate most. I hate the perfect information assumption most. I have figured out a way that may be feasible to deal with it, but it is just too early to work on it.

  Making marriage and having a child choice variables

  Marriage can be modeled as an exogenous shock representing meeting someone she can marry. When this exogenous shock happens, marriage becomes available as a choice variable. Having a child is the reverse process: a decision is made first, then one waits for the result. Once we model these two decisions, we can estimate their value to the individual.

  Original post: http://bbs.efnchina.com/dispbbs.asp?boardid=33751&ID=395165
