
Volatility dependant monetary policy


VOLATILITY DEPENDENT MONETARY POLICY

Author: Hugo Evers

Supervisor: Dr. Makarewicz

Student number: 10208895
Bachelor thesis Econometrics
Universiteit van Amsterdam
2016-07-16

Abstract: This paper presents a monetary policy based on the popular Taylor-type rule, in which the coefficient governing the response to deviations of inflation from its steady-state level depends on the volatility of GDP. A framework is proposed for obtaining a per-period estimate of volatility, setting control limits, and transforming this input into an implementable coefficient in the Taylor-type rule. The optimality of a more or less aggressive coefficient is analysed in terms of welfare costs; letting the coefficient deviate by +/- half its optimal fixed value is shown to achieve superior welfare-cost minimisation.

(2)
(3)
(4)

Declaration of own work. I, Hugo Evers, hereby declare this to be my own work.


Contents

1 Introduction
2 Research methods
  2.1 Monetary policy (Taylor-type rule)
  2.2 Inflation adjustment coefficient
  2.3 EWMA volatility estimator
  2.4 Upper and lower control limits
  2.5 Measure of welfare
3 The simulated economy
  3.1 The model
    3.1.1 Households
    3.1.2 Firms
    3.1.3 Government
  3.2 Calibration
    3.2.1 Structural parameters
    3.2.2 Shocks
4 Results
  4.1 Output gap and inflation IRFs
  4.2 Welfare
  4.3 Variables of economic interest IRFs
  4.4 Monetary policy function
5.1 Conclusion
5.2 Discussion


1 Introduction

In the aftermath of the 2008 financial crisis and its subsequent recession, concerns about financial stability and the risk of deflation highlighted the importance of sound monetary policy. The rise and fall of stock values is driven by economic agents’ expectations of their future value. The inherent feedback between the inertia of expectations and its effect on demand creates the potential for economic bubbles to form. After a bubble bursts, the economy is left with a shock to the supply of and demand for liquid assets. At this point central banks employ monetary policy to ease the effects of the crisis and stabilize the economy. As a first response to the 2008 crisis, many central banks lowered interest rates to the so-called zero lower bound, both to stimulate the economy and to let banks meet the demand for liquid assets. Arguably this does not work, because of a lack of demand in the economy and the high rate at which banks rejected loan applications, which leaves firms unable to obtain loans on terms favourable enough to realize profit opportunities through investment (Pollin, 2012). A prolonged period of low interest rates following a substantial, long-lasting shock can render monetary policy ineffective (Eggertsson and Woodford, 2003). This is called a liquidity trap, and it functions as a self-fulfilling prophecy in the expectations framework. Monetary policy should therefore be employed such that it can steer the economy out of a deflationary spiral before it reaches that point.

A supply shock itself is characterized by a spike of high volatility which slowly decays. High levels of output volatility not only function as an indicator of price instability; sharp swings in inflation also cause considerable damage to the economy in their own right. An increase in real interest rate volatility triggers a fall in output, consumption, investment, and hours worked (Fernández-Villaverde et al., 2011). Monetary policy should therefore aim to stabilize fierce deviations from the steady state.

Monetary policy should thus optimally stabilize the economy in two scenarios: during the shock, when volatility should be minimized, and shortly after, when a possible deflationary spiral should be prevented.

In order to explore these scenarios, agents’ expectations are assumed to be rational. Rational expectations make two very strong assumptions: economic agents (firms and households) make optimal decisions, and they know the economy perfectly and are therefore able to predict future values perfectly, so that all deviations are due to random errors. This allows precise modelling of these stochastic deviations (Sims, 2003), calculated using mathematical perturbation methods available in the software package Dynare. In this paper I propose a form of inflation-level targeting that depends on the state of the economy in terms of volatility. The intuition is that it allows the monetary authority to use a different (more aggressive) policy during the shock to stabilize the economy, and a more passive policy while the economy is settling after the shock, both to prevent a liquidity trap and to leave more room for the economy to find its own steady state. Since a supply shock causes twofold potential economic damage, it is worthwhile to model the response of output volatility to a shock, and to test whether a monetary policy consisting of two rules which bind at different stages of the shock would provide better stabilization at these stages. In order to test whether this type of stabilisation is optimal, a volatility estimator based on the popular RiskMetrics estimator, widely used in finance, is introduced, after which a coefficient rule is proposed which transforms this input into implementable policy. This volatility-dependent rule is tested with regard to its effect on welfare by varying the amount by which the coefficient diverges (how far apart its extreme values lie). This paper is divided into the following sections: section 2 presents the adaptations made to the original model this paper builds on; section 3 presents the economic environment in which the rule will be tested; section 4 presents the results obtained from the analysis in Dynare; section 5 concludes and discusses the applicability of the obtained results.


2 Research methods

The hypothesis that monetary policy should react more strongly to deviations when volatility is high is tested in the following manner. First, the monetary authority’s policy rule is described, after which the coefficient rule is discussed. Since volatility is assumed to be non-ergodic (varying through time) and mean-reverting, a point-wise estimator from the popular RiskMetrics family of estimators is presented, followed by its upper and lower control limits. Finally, the metric by which the monetary policy is judged successful, chosen to be consumer welfare, is presented.

2.1 Monetary policy (Taylor type rule)

The central bank sets the nominal interest rate by adjusting its instrument in response to deviations of inflation and output from their respective target levels according to a simple Taylor (1993)-type rule:

log(R_t / R*) = α_R log(R_{t-1} / R*) + α_π E_t log(π_{t+i} / π*) + α_y E_t log(y_{t+i} / y*)

where R*, y* and π* represent the steady states of the nominal rate (gross rate), output and inflation respectively, and y_{t+i} and π_{t+i} represent output and inflation in period t+i, with i = -1, 0, 1. In this paper i = 0 is chosen, which means the monetary authority sets its interest rate as a reaction to deviations in the same period.
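As an illustration, the rule with i = 0 can be computed as follows. This is a sketch, not the thesis code, and the α_π and α_y defaults below are placeholders rather than the calibrated optima (the optimal α_y is not stated in this section):

```python
import math

def taylor_rate(R_prev, R_star, pi_t, pi_star, y_t, y_star,
                alpha_R=0.0, alpha_pi=3.0, alpha_y=0.5):
    """Taylor-type rule in log deviations with contemporaneous (i = 0)
    inflation and output terms; returns the implied gross nominal rate."""
    log_dev = (alpha_R * math.log(R_prev / R_star)
               + alpha_pi * math.log(pi_t / pi_star)
               + alpha_y * math.log(y_t / y_star))
    return R_star * math.exp(log_dev)
```

With α_R = 0 the lagged rate drops out, so the nominal rate responds only to same-period deviations of inflation and output.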

Furthermore, the α_R parameter determines the degree of interest rate smoothing and is set to 0 in order to isolate the effect volatility dependence introduces. The α_y parameter is set in accordance with the optimal value for this parameter when α_π takes on values between 1 and 3, in order to achieve optimal policy in terms of welfare. This leaves the α_π parameter, which governs the reaction to deviations of inflation from its steady state. This coefficient is modelled as a function of variance in this paper, and is explained in the following paragraph.

2.2 Inflation adjustment coefficient

The inflation adjustment coefficient (α_π) is modelled as a logistic asymptotic interpolation function, such that when volatility reaches values outside its 95% confidence interval, the coefficient reaches its asymptote and stays the same:

α_π = (U − L) / (Q exp(−k ω_t) + 1) + L

where U and L denote the upper and lower values the coefficient can take on, and ω_t denotes the variance, so that L < α_π < U.

This interpolation function is calibrated such that for a measured variance which lies on the 0.05 quantile of the distribution of the variance estimator, denoted by ω_t^L, α_π(ω_t^L) = L, and for a variance which lies on the 0.95 quantile, denoted by ω_t^U, α_π(ω_t^U) = U. The lower and upper values L and U are parameterized by setting (L, U) = (c − Δ, c + Δ), after which the shape parameters are set such that the relative shape corresponds to the wanted features. Q is set to e^{kr}, with r = (ω_t^L + ω_t^U)/2, such that r represents the inflection point midway between the outer interval values for the variance.


And k is set proportional to 1/(ω_t^U − ω_t^L), such that the “bending effect” caused by k (which determines how suddenly the rule switches between the high and low coefficient value) is in proportion to the values of the variance.

The values of ω_t^L and ω_t^U (which are discussed in section 2.4), together with the shape parameters, are calibrated such that E_t α_π(ω_t) = c. Substituting Q = e^{kr} and (L, U) = (c − Δ, c + Δ) into the logistic function above gives the complete function:

α_π(ω_t^L, ω_t^U, c, Δ, ω_t) = (c − Δ) + 2Δ / (1 + exp(k (r − ω_t)))

which is plotted for different values of k in figure 1:
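The coefficient rule can be sketched as follows. The closed-form expression is only partly legible in the source, so the choices Q = e^{kr} and k = 2c/(ω_t^U − ω_t^L) here are assumptions consistent with the described calibration:

```python
import math

def alpha_pi(omega, omega_L, omega_U, c=3.0, delta=0.5):
    """Logistic interpolation of the inflation coefficient between
    L = c - delta and U = c + delta (section 2.2). The inflection
    point sits at the midpoint r of the control limits."""
    r = 0.5 * (omega_L + omega_U)       # inflection point
    k = 2.0 * c / (omega_U - omega_L)   # assumed steepness scaling
    Q = math.exp(k * r)                 # places the inflection at omega = r
    L, U = c - delta, c + delta
    return (U - L) / (Q * math.exp(-k * omega) + 1.0) + L
```

At ω = r the function returns exactly c, and it approaches L and U for ω well below ω_t^L and well above ω_t^U.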


Figure 1, coefficient function for different values of parameter k

2.3 EWMA Volatility estimator

Since volatility is an unobservable feature of the simulated economy, it can only be approximated using statistical models, the simplest and most realistic of which is based on the RiskMetrics estimator popularized by J.P. Morgan. The estimator is derived as an incrementally updated, exponentially weighted variance estimator which, by assigning the highest weight to the latest observations, is able to capture the dynamic features of volatility. The volatility of natural output is used as the measure of instability in this paper. The estimators for the mean and variance are given by the following formulas:

μ_t = (1 − δ_μ) μ_{t−1} + δ_μ y_t
ω_t = (1 − δ_ω) ω_{t−1} + δ_ω (y_t − μ_t)(y_t − μ_{t−1})

The estimator consists of an exponentially weighted moving average estimator for the mean (whose coefficient is denoted by δ_μ and set to δ_μ = 1/1000) and an exponentially weighted moving average estimator for the variance, which uses a coefficient based on RiskMetrics for quarterly data (set to δ_ω = 7/100). The estimators are implemented according to Finch (2009), who derives a consistent estimator.
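A one-step implementation of the incrementally updated EWMA estimators above, as a minimal sketch of the recurrences:

```python
def ewma_update(mu_prev, omega_prev, y, d_mu=1/1000, d_omega=7/100):
    """One step of the exponentially weighted mean/variance estimators
    of section 2.3 (after Finch, 2009); the variance update uses the
    product (y - mu_t)(y - mu_{t-1}), which keeps it consistent."""
    mu = (1 - d_mu) * mu_prev + d_mu * y
    omega = (1 - d_omega) * omega_prev + d_omega * (y - mu) * (y - mu_prev)
    return mu, omega
```

Feeding a constant series drives ω_t toward zero, while a sudden jump in y_t raises ω_t sharply, matching the spike-and-decay pattern described in the introduction.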


2.4 Upper and Lower Control Limits

Due to the heavy kurtosis and long tails observed in the estimator values, the values ω_t are assumed to follow a log-normal distribution, which is a common feature of EWMA estimators in the finance literature (Fibazzo, 2008):

ω_t ∼ LogNormal(μ_log ω, ξ)

This assumption allows the upper and lower bounds of the variance to be updated during the simulation. Since α_π(ω_t^L, ω_t^U, c, Δ, ω_t) depends on properly chosen bounds (ω_t^L and ω_t^U), when these bounds are not specified correctly, E α_π ≠ c and the rule shows erratic behaviour. Therefore the fitted distribution values μ_log ω are estimated from the previous simulation (which provides a close estimate of the 95% interval in the next simulation when the difference in Δ between simulations is small), after which the simulation is run again for the same value of Δ using the updated bounds.

During the simulation, the bounds are updated using the assumption ω_t ∼ LogNormal(μ_log ω, ξ), where μ_log ω and ξ are assumed to be ergodic and are updated during the simulation. Therefore, the only movement in the moments of this estimator is supposed to come from a small error due to basing the initial guess for the values on the previous simulation, where a different value of Δ was used.

The moments and bounds are updated according to the following method:

Given that ω_t ∼ LogNormal(μ_log ω, ξ_log ω), applying a logarithmic transformation yields the distribution log ω_t ∼ Normal(μ_log ω, ξ_log ω), where μ_log ω and ξ_log ω are calculated according to the same methods presented in section 2.3:

μ_log ω,t = (1 − λ_b) μ_log ω,t−1 + λ_b log ω_t
ξ_log ω,t = (1 − λ_b) ξ_log ω,t−1 + λ_b (log ω_t − μ_log ω,t)²

with starting values μ_log ω = E[log ω] and ξ_log ω = Var[log ω] taken from the previous simulation’s sample.


The estimator is then transformed to a standard normal distribution as follows:

Z_t = (log ω_t − μ_log ω,t) / √ξ_log ω,t,   where Z_t ∼ Normal(0, 1)

For this transformed variable an approximation of the quantile function F⁻¹(p) = ω_t^CI, where 0 ≤ p ≤ 1, can be applied. Inverting the general relation

F(x) = ∫₀ˣ LogNormal(ζ; μ, σ) dζ = p = ½ [erf((log x − μ) / (√2 σ)) + 1]

gives the following inverted relation for the confidence limits:

ω_t^CI = exp(√(2 ξ_log ω,t) · erf⁻¹(2p − 1) + μ_log ω,t)

where the subscript t indicates that this bound is updated in period t; due to the very small coefficient used in calculating the moments, no sudden movements in these values occur. The approximation (Winitzki, 2008) used to calculate the inverse error function during the simulation is:

erf⁻¹(x) ≈ sgn(x) · √[ √( (2/(πa) + ln(1 − x²)/2)² − ln(1 − x²)/a ) − (2/(πa) + ln(1 − x²)/2) ],   with a = 0.147

Using this relation the 0.05 and 0.95 quantiles are calculated, in order to use these as asymptotic bounds for the logistic interpolation function for α_π presented in section 2.2. After each simulation, the lognormal distribution is fitted using maximum likelihood, and the quantiles are obtained from the samples. These initial values are used to find more accurate starting points for the running moments and confidence interval estimators used during the simulation.
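The control-limit computation can be sketched as follows, treating ξ as the running variance of log ω_t (the source is ambiguous on whether ξ denotes the variance or the standard deviation, so that reading is an assumption):

```python
import math

def inv_erf(x, a=0.147):
    """Winitzki (2008) approximation to the inverse error function."""
    s = math.copysign(1.0, x)
    ln1mx2 = math.log(1.0 - x * x)
    t = 2.0 / (math.pi * a) + ln1mx2 / 2.0
    return s * math.sqrt(math.sqrt(t * t - ln1mx2 / a) - t)

def lognormal_quantile(mu_log, xi_log, p):
    """p-quantile of omega_t under the LogNormal assumption (section 2.4)."""
    return math.exp(math.sqrt(2.0 * xi_log) * inv_erf(2.0 * p - 1.0) + mu_log)

# Control limits for given running moments:
# omega_L = lognormal_quantile(mu_log, xi_log, 0.05)
# omega_U = lognormal_quantile(mu_log, xi_log, 0.95)
```

The Winitzki approximation is accurate to a relative error of a few parts in a thousand, which is ample for setting control limits.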

2.5 Measure of welfare

In order to evaluate whether this alternative policy formulation abides by the monetary authority’s objective, the equilibrium must meet two requirements. Firstly, a determinate steady state must be obtained in equilibrium. Secondly, for the monetary policy to be optimal, it must maximize welfare (or minimize welfare loss). The measure used to evaluate the latter is the sum of discounted consumer welfare. The monetary authority’s objective is to find a value for Δ that maximizes:

V_t ≡ E_t Σ_{j=0}^∞ β^j U(c_{t+j}, h_{t+j})

The welfare loss is calculated in comparison to the optimal Ramsey policy (the policy where the coefficients are optimally chosen every period). The welfare loss is defined as the fraction λ of consumption a consumer under the Ramsey policy is willing to give up to be as well off as under the alternative policy:

V_0 = E_0 Σ_{t=0}^∞ β^t U((1 − λ) c_t, h_t)

Given the complexity of the economic environment studied in this paper, the measure of welfare is approximated to second order. With the period utility function used in this paper, scaling consumption by (1 − λ) implies

V^a = (1 − λ)^{1−σ} V^r + ((1 − λ)^{1−σ} − 1) / ((1 − β)(1 − σ))

Expanding (1 − λ)^{1−σ} ≈ 1 − (1 − σ)λ and solving gives the approximation

λ ≈ (V^r − V^a) / (1/(1 − β) + (1 − σ) V^r)

where V^r and V^a denote the unconditional expectations of welfare obtained under the Ramsey policy (V^r) and under the alternative policy proposed in this paper (V^a).
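The welfare-cost computation can be sketched from the relation above. Since the original expression is partly garbled, this reconstruction should be read as an approximation consistent with the surrounding derivation, not as the thesis code; the β and σ defaults come from Table 1:

```python
def welfare_cost(V_r, V_a, beta=1.04 ** (-0.25), sigma=2.0):
    """Consumption-equivalent welfare cost lambda of the alternative
    policy relative to the Ramsey policy (first-order reconstruction)."""
    return (V_r - V_a) / (1.0 / (1.0 - beta) + (1.0 - sigma) * V_r)
```

V^r = V^a gives λ = 0; for σ = 2 and the negative welfare levels typical of this utility function, the denominator is positive, so a worse alternative policy (V^a < V^r) yields λ > 0.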


3. The simulated economy

This section presents the economic model and its calibration (Schmitt-Grohé and Uribe, 2007) on which this thesis builds. First the model, consisting of nonlinear equations, is presented, after which the calibration of the parameters and shocks is discussed.

3.1 The model

The model which serves as the basis for the analysis of volatility-dependent Taylor rules is a real business cycle model with capital accumulation and endogenous labour supply, driven by technology and government spending shocks. It features New-Keynesian frictions, the sources of which are as follows:
1. Sticky prices as nominal rigidities.
2. Firms with a demand for money motivated by a working-capital constraint on labour costs.
3. A cash-in-advance constraint which motivates a demand for money in households.
4. A production market which features monopolistic competition.
5. Distortionary taxation which varies over time.
These frictions motivate the central bank’s effort to stabilize the welfare costs incurred from deviations around the steady state, which would resolve without welfare loss in a frictionless environment. The model consists of a continuum of households, firms and the central bank.

3.1.1 Households

The economy features a continuum of identical households who maximise the following objective function over consumption (denoted by c_t) and labour (denoted by h_t):

E_0 Σ_{t=0}^∞ β^t U(c_t, h_t)


where E_t denotes the expectation based on information available at time t, β represents the subjective discount factor, and U represents the period utility function, which is assumed to take the form:

U(c, h) = ([c (1 − h)^γ]^{1−σ} − 1) / (1 − σ)
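For concreteness, the period utility with the calibration used later in this paper (σ = 2 and γ = 3.6133 from Table 1) can be written as a small sketch:

```python
def utility(c, h, sigma=2.0, gamma=3.6133):
    """Period utility U(c, h) = ((c (1 - h)^gamma)^(1 - sigma) - 1) / (1 - sigma),
    defined for consumption c > 0 and hours worked 0 <= h < 1."""
    return ((c * (1.0 - h) ** gamma) ** (1.0 - sigma) - 1.0) / (1.0 - sigma)
```

Utility is increasing in consumption and decreasing in hours worked, as the optimization below requires.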

The consumption good is a composite made of a continuum of intermediate goods c_{i,t}, where 0 < i < 1. The intermediate goods are aggregated according to the following aggregator function:

c_t = [∫₀¹ c_{i,t}^{1 − 1/η} di]^{1/(1 − 1/η)}

where the intratemporal elasticity of substitution across different varieties of consumption goods is denoted by η. The optimal consumption of each variety i purchased in period t must minimize total expenditure subject to the aggregation constraint, which gives:

c_{i,t} = c_t (p_{i,t} / p_t)^{−η}

where p_{i,t} denotes the nominal price of good i in period t. The nominal price index p_t is given by:

p_t = [∫₀¹ p_{i,t}^{1−η} di]^{1/(1−η)}

The household budget constraint is given by:


c_t + E_t[d_{t,t+1} x_{t+1}] / p_t + m_t^h + i_t + τ_t^L = (p_{t−1} m_{t−1}^h) / p_t + x_t / p_t + (1 − τ_t^D)(w_t h_t + u_t k_t) + δ q̃_t τ_t^D k_t + φ̃_t

where d denotes the stochastic discount factor, x_t the payoff of the household’s portfolio of nominal assets, k capital, i gross investment, φ̃_t profits received from the ownership of firms, τ_t^D the income tax rate, and τ_t^L lump-sum taxes. The variable q̃_t denotes the market price of one unit of installed capital, and δ q̃_t τ_t^D k_t represents a depreciation allowance for tax purposes. The capital stock is assumed to depreciate at the constant rate δ. The evolution of capital is given by:

k_{t+1} = i_t + (1 − δ) k_t

The capital adjustment cost is parameterized as:

Ψ(x) = 1 − (ψ/2)(x − 1)²

The demand for investment of each variety is:

i_{i,t} = i_t (p_{i,t} / p_t)^{−η}

A borrowing constraint applies such that households cannot partake in Ponzi schemes. The choices made by households in this economy are therefore subject to distortion in both the leisure-labour choice and the decision to accumulate capital over time, caused by the income tax rate as well as the opportunity cost of holding money (the interest rate).


3.1.2 Firms

Each good is produced by a single firm in a monopolistically competitive environment. The firm chooses contingent plans for p_{i,t}, h_{i,t}, k_{i,t} and m_{i,t}^f to maximize the present discounted value of profits, given by:

E_t Σ_{s=t}^∞ d_{t,s} P_s φ_{i,s}

Each firm produces output using capital (k_{i,t}) and labour services (h_{i,t}). The production technology is given by:

z_t F(k_{i,t}, h_{i,t}) − χ

The variable z_t denotes an exogenous aggregate productivity shock, χ introduces fixed costs of production, and the production function F is of the Cobb–Douglas form, assumed to be homogeneous of degree one, concave and strictly increasing in both arguments:

F(k, h) = h^{1−θ} k^θ

The aggregate demand for good i is given by:

a_{i,t} = a_t (P_{i,t} / P_t)^{−η}

where aggregate absorption is given by:

a_t ≡ c_t + i_t + g_t

Demand for money by firms is introduced by assuming wage payments are subject to the following cash-in-advance constraint:


m_{i,t}^f ≥ ν^f w_t h_{i,t}

where m_{i,t}^f denotes the demand for real money balances by firm i in period t, and the parameter ν^f ≥ 0 denotes the fraction of the wage bill that must be backed with monetary assets. Letting the bond holdings of firm i in period t be denoted by B_{i,t}^f, the period-by-period budget constraint of firm i can be written as follows:

M_{i,t}^f + B_{i,t}^f = M_{i,t−1}^f + R_{t−1} B_{i,t−1}^f + P_{i,t} a_{i,t} − P_t u_t k_{i,t} − P_t w_t h_{i,t} − P_t φ_{i,t}

It is assumed the profit-distribution policy of firms is such that initial financial wealth and the financial wealth held in period 1 are zero. These conditions imply the real profits of firm i in period t can be written as:

φ_{i,t} ≡ (P_{i,t} / P_t) a_{i,t} − k_{i,t} u_t − h_{i,t} w_t − m_{i,t}^f (1 − 1/R_t)

Sticky prices are implemented according to the Calvo (1983) and Yun (1996) mechanism, such that each period a randomly picked fraction of firms, represented by (1 − α) ∈ [0, 1), can change their prices optimally. If firm i gets to choose its price in period t, the chosen price P̃_{i,t} maximizes the following objective:

E_t Σ_{s=t}^∞ d_{t,s} P_s α^{s−t} [ (P̃_{i,t} / P_s)^{1−η} a_s − u_s k_{i,s} − w_s h_{i,s} (1 + ν^f (1 − R_s^{−1})) + mc_{i,s} ( z_s F(k_{i,s}, h_{i,s}) − χ − (P̃_{i,t} / P_s)^{−η} a_s ) ]


3.1.3 Government

The monetary policy rule has already been laid out in section 2.1; this section therefore deals only with the fiscal policy implemented in the simulated economy. The government’s period budget constraint is given by:

M_t + B_t = R_{t−1} B_{t−1} + M_{t−1} + P_t g_t − P_t τ_t

where M_t represents the money printed by the government, B_t represents nominally risk-free bonds, P_t τ_t represents the taxes collected, g_t denotes the exogenous expenditure stream, and R_t represents the interest rate set by the government (section 2.1).

Combining this expression with the optimality conditions of the household’s problem, and using the fact that R_t is the inverse of the period-t price of a portfolio which pays one dollar in every state next period (R_t = 1 / E_t d_{t,t+1}), yields the Euler equation:

λ_t = β R_t E_t [λ_{t+1} / π_{t+1}]

In order to rewrite the budget constraint, the following definitions are used:

The variable g_t denotes per capita government spending. It is assumed the government minimizes the cost of g_t, so that the public demand for each variety i of intermediate good is given by:

g_{i,t} = (P_{i,t} / P_t)^{−η} g_t

Let l_{t−1} ≡ (R_{t−1} B_{t−1} + M_{t−1}) / P_{t−1} denote total real government liabilities outstanding at the end of period t−1, and let m_t = M_t / P_t denote the real money balance in circulation. The government budget constraint can then be written as:

l_t = R_t (g_t − τ_t) + l_{t−1} R_t / π_t − m_t (R_t − 1)

where gross consumer price inflation is given by π_t ≡ P_t / P_{t−1}.


The fiscal regime is defined by:

τ_t − τ* = γ_1 (l_{t−1} − l*)

where τ_t represents total tax revenue, τ_t = τ_t^L + y_t τ_t^D, which consists of lump-sum taxation, denoted by τ_t^L, and revenue from income taxation, y_t τ_t^D, where y_t denotes aggregate demand. Combining the fiscal regime with the government budget constraint leads to the following fiscal policy:

l_t = R_t (1/π_t − γ_1) l_{t−1} + R_t (γ_1 l* − τ* + g_t) − m_t (R_t − 1)

where γ_1 determines whether the fiscal policy is passive or active. In this paper the fiscal policy is assumed to be passive, with γ_1 chosen at the boundary of the active region. In this case, in a stationary equilibrium near the deterministic steady state, deviations of real government liabilities from their non-stochastic steady-state level grow at a rate less than the real interest rate, and the discounted value of government liabilities is expected to converge to zero regardless of the stance of monetary policy.
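The substitution of the fiscal rule into the budget constraint can be checked numerically; a sketch of one step of the liabilities recursion (reconstructed form, not the thesis code):

```python
def liabilities_next(l_prev, R, pi, g, m, tau_star, l_star, gamma1):
    """One step of real government liabilities: substitute the fiscal rule
    tau_t = tau* + gamma1 (l_{t-1} - l*) into
    l_t = R_t (g_t - tau_t) + l_{t-1} R_t / pi_t - m_t (R_t - 1)."""
    tau = tau_star + gamma1 * (l_prev - l_star)
    return R * (g - tau) + l_prev * R / pi - m * (R - 1.0)
```

Expanding the substitution by hand reproduces the combined expression l_t = R_t(1/π_t − γ_1) l_{t−1} + R_t(γ_1 l* − τ* + g_t) − m_t(R_t − 1), which the function computes implicitly.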


3.2 Calibration

3.2.1 Structural parameters

This section presents the calibration of the deep structural parameters of the original model (Schmitt-Grohé and Uribe, 2007) on which this paper builds.

To obtain the deep structural parameters, the model is calibrated to the U.S. economy, with the time unit chosen to be one quarter. It is assumed the economy operates in the deterministic steady state of a competitive equilibrium in which the inflation rate is 4.2 percent per annum, the average growth rate of the U.S. GDP deflator between 1960 and 1998. In addition, it is assumed that all government revenues originate in income taxation. The share of government purchases in value added is required to be 17 percent in steady state, in line with the observed U.S. post-war average. A steady-state debt-to-GDP ratio of 44 percent per year is imposed, corresponding to the average federal debt held by the public as a percentage of GDP in the United States between 1984 and 2003. The preference parameter γ is set such that households spend on average 20 percent of their time working, as is the case in the U.S. economy according to Prescott (1986). Table 1 presents the deep structural parameter values used in the original model.

Table 1:

Parameter   Value          Description
σ           2              Preference parameter
θ           0.3            Cost share of capital
β           1.04^(−1/4)    Quarterly discount factor
η           5              Price elasticity of demand
ḡ           0.0552         Steady state of government purchases
δ           1.1^(1/4) − 1  Quarterly depreciation rate
ν^f         0.6307         Fraction of wage payments held in money
ν^h         0.3496         Fraction of consumption held in money
α           0.8            Share of firms that cannot change their price
γ           3.6133         Preference parameter
ψ           0              Investment adjustment cost
χ           0.0968         Fixed cost parameter
ρ_g         0.87           Serial correlation of government spending
σ_εg        0.016          S.D. of innovation to government spending shock
ρ_z         0.8556         Serial correlation of productivity shock
σ_εz        0.0064         S.D. of innovation to productivity shock

3.2.2 Shocks

The shocks to the steady state of government purchases g_t and productivity z_t are parameterized as in Schmitt-Grohé and Uribe (2006a). The shock to government purchases is assumed to follow an AR(1) process of the form:

log(g_t / ḡ) = ρ_g log(g_{t−1} / ḡ) + ε_t^g

where ḡ is a constant. The first-order autocorrelation ρ_g is set to 0.87 and the standard deviation of ε_t^g to 0.016. Productivity shocks are also assumed to follow an AR(1) process:

log(z_t) = ρ_z log(z_{t−1}) + ε_t^z

where the first-order autocorrelation ρ_z = 0.8556 and the standard deviation of ε_t^z is 0.0064.
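The two shock processes can be simulated directly; a sketch (the 200-period horizon and the seed are arbitrary choices, not from the thesis):

```python
import random

def simulate_ar1(rho, sigma_eps, periods, log_x0=0.0, seed=0):
    """Simulate the log-deviation AR(1) processes of section 3.2.2."""
    rng = random.Random(seed)
    path, log_x = [], log_x0
    for _ in range(periods):
        log_x = rho * log_x + rng.gauss(0.0, sigma_eps)
        path.append(log_x)
    return path

g_dev = simulate_ar1(0.87, 0.016, 200)     # government spending shock
z_dev = simulate_ar1(0.8556, 0.0064, 200)  # productivity shock
```

With the innovation switched off, a unit shock decays geometrically at rate ρ, which is the persistence pattern the impulse responses in section 4 are built around.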


4. Results

This section presents the results obtained from implementing the monetary policy proposed in section 2 in the simulated economy put forth in section 3. First, the impulse response functions for the output gap and inflation are presented for different values of Δ, followed by the effect on welfare. After that, the impulse response functions for several economic variables of interest to welfare are shown. The section concludes with an analysis of the rule itself.

4.1 Output gap and inflation IRFs


The following figure (figure 2) presents the impulse response functions of GDP and inflation to the productivity shock and the government spending shock respectively.

Figure 2, Impulse response functions for GDP and inflation to shocks

Figure 2 shows that higher values of Δ cause the productivity shocks to be resolved faster within the first 40 periods. Government spending shocks tend to resolve with very small differences between values of Δ when it comes to GDP; the effect of Δ on inflation appears more dramatic, since this shock decays over a long period of time. These results follow from the increase in the coefficient during the shock. Only the impulse response functions for Δ between 0 and 6/10 have been drawn, since the equilibrium becomes indeterminate beyond that (see section 4.2). The impulse response functions also take on a wobbly appearance: the plots are sampled by replicating the same shock 50 times and taking the average (for each impulse response function), but due to the nonlinear nature of the underlying model and the nonlinear behaviour of the Taylor-type rule, it would take an infeasible number of replications to smooth this out.

4.2 Welfare


In this section the optimality of the proposed volatility-dependent monetary policy rule is addressed. The following figure (figure 3) presents the median discounted welfare loss over 1000 simulated periods for 0 ≤ Δ ≤ c, divided into 20 steps, together with the 0.05 and 0.95 quantiles.

Figure 3, graph of median, 0.95 and 0.05 quantiles of welfare obtained for different values of delta

The volatility-responsive rule proves superior to the normal rule (where the coefficient is fixed) for values of Δ around 0.5, meaning the coefficient α_π moves between c − Δ during low volatility and c + Δ during high volatility (with c = 3). This yields an 8.7% reduction in welfare loss around the optimal value of Δ. The figure implies that a Taylor rule coefficient which is unresponsive to volatility (Δ = 0) is less optimal than a monetary policy which reacts more strongly in response to volatility, but only up to a certain point: when the reaction is too strong, welfare loss increases, and the equilibrium eventually becomes indeterminate when α_π takes on values below 1. This is in line with the findings in the original paper (Schmitt-Grohé and Uribe, 2007), who also report an indeterminate equilibrium for α_π < 1.


4.3 Variables of economic interest IRFs

This section presents the Impulse response functions for the several economic variables to the productivity and government spending shock.

Firstly, the impulse response functions to a productivity shock are shown for the following economic variables: labour supply (hours worked), capital utilisation, marginal costs, investment, consumption and wages.

Figure 4, IRFs for several variables to Productivity shock

The following figure (figure 5) presents the impulse response functions of the aforementioned economic variables to a government spending shock.


Figure 5, IRFs for several variables to Government spending shock

There is a clear difference between the response to a productivity shock and to a government spending shock. The volatility dependence seems to make very little difference for the latter type of shock. This could be because a government spending shock has a small impact on GDP volatility, or, more likely, because responding more strongly to deviations in inflation does not resolve a government spending shock faster.

Furthermore, the same pattern of stronger dampening of shocks does not occur everywhere, especially for the most significant variables, capital utilisation and wages. In terms of wages, the rule dampens the shock more strongly for higher values of Δ; the shock to capital utilisation, however, seems exacerbated in terms of welfare, since lower capital utilisation occurs in response to a productivity shock.

4.4 Monetary policy function

This section presents the analytics pertaining to the monetary policy rule itself. The following figure (figure 6) presents the α_π function, simulated over 170 periods.


Figure 6, α_π function

The function is plotted for Δ = 0.01. The function binds when volatility reaches its 0.05 or 0.95 limits; at these points the coefficient reaches its minimum/maximum value (in this case 2.99 and 3.01), in order to prevent large peak values which could disturb the economy on their own. The Taylor rule coefficient α_π is a function of volatility; the measured volatility estimates present the following histogram (figure 7).


Figure 7, volatility histogram

Passed through the Taylor-type coefficient rule (see section 2.2), which is a type of logistic interpolation function, the output α_π presents the following histogram (figure 8).

Figure 8, α_π histogram

These figures aim to provide some clarity as to how the rule transforms the input (GDP volatility) into the output (the coefficient in a Taylor-type rule).


5.1 Conclusion

This paper looked at whether a more aggressive approach to monetary policy (in terms of the coefficient in the Taylor type rule which governs the response to deviations from steady state inflation) would be optimal. In order to do so, a mechanism which depends on volatility is used to recognize whether the economy is in the steady state or experiencing a shock. This allows the monetary authority to act differently during a shock vs. in the steady state. This paper looks at whether a stronger response to deviations minimizes welfare loss. The results in section 4.1 show a higher coefficient value mutes the response of the natural output and inflation to a productivity and government spending shock to a stronger degree. Which is logical since during the shock, the rule binds more strongly and therefore reduces the impact the shock has on GDP and inflation. This mainly serves to underline the premise of this paper, which is a shock causes volatility, therefore reacting more strongly during a shock minimizes its impact. However, since the monetary authority’s aim is to maximize welfare in the economy (or minimizing welfare loss), the results from section 4.2 are paramount in this paper’s analysis. When the economy is operating under values which are far from its optimum, it takes agent’s a certain amount of time to adjust (which in this paper is modelled by 5 distinct frictions (see section 3.1)) operating outside of this steady state causes welfare loss, it is therefore the monetary authority’s imperative to correct this. In terms of welfare, a stronger reaction to deviations during a shock proved favourable by an 8% margin. When the difference in coefficient aggressiveness surpassed this optimum however welfare loss was causes and eventual indeterminacy. Since the rule is approximated in its steady state, it is assumed the monetary authority applies this coefficient throughout. 
The indeterminacy occurs when, in response to a rise in expected inflation, the nominal rate does not increase sufficiently to raise the real rate, and self-fulfilling bursts of inflation and output become possible. When this rise in expected inflation is entirely anticipated by economic agents and does not cause volatility through the several frictions, the rule does not bind; this leads to a fall in real rates at the moment a rise would be necessary, which in turn fuels the boom. Section 4.3 presents the impulse response functions for several economic variables of interest, which are tied to welfare. The results show the same muting of the response with regard to the productivity shock; the government spending shock, however, is not resolved in a significantly stronger way for higher values of 𝛥. The parameters most significant with regard to welfare show a positive correlation with a stronger rule; the diminished capital utilisation, however, could explain why these benefits eventually cancel out and welfare loss increases for 𝛥 > 0.55.

Section 4.4 concludes by graphing how the coefficient function transforms the input (volatility in GDP) into a coefficient used in the Taylor type rule. The rule binds for values outside of the confidence interval, in order to prevent large peak values which could disturb the economy on their own, and also in order to be more in line with the general application of the Taylor type rule, where a fixed coefficient is used.
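The volatility input behind this mechanism (sections 2.3–2.4) can be sketched in simplified form as follows; the function name and the smoothing weights mirror the `yema`/`ysema` recursions in the appendix only loosely, and are assumptions for illustration:

```python
def ewma_variance(series, lam_mean=0.001, lam_var=0.09):
    """Recursive EWMA mean/variance in the style of Finch (2009),
    analogous to the yema/ysema recursions in the appendix code.

    Returns the path of the variance estimate after each new observation.
    """
    mean = series[0]
    var = 0.0
    path = []
    for x in series[1:]:
        prev_mean = mean
        # update the EWMA mean, then the EWMA variance using the
        # cross-term (x - new_mean) * (x - old_mean)
        mean = (1.0 - lam_mean) * mean + lam_mean * x
        var = (1.0 - lam_var) * var + lam_var * (x - mean) * (x - prev_mean)
        path.append(var)
    return path
```

Control limits can then be set as, for example, the 5% and 95% quantiles of the simulated variance path, so that the coefficient rule only binds when the current estimate leaves that band.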

5.2 Discussion

Whether the conclusions drawn from the results in section 4 are transferrable to the real world is very much open to question. Firstly, there are many critiques of rational expectations and of DSGE models in general. With regard to whether the base model itself would be useful for policy analysis, it has been noted (Chari and Kehoe, 2008) that model builders tend to add many free parameters and overly general mathematical functions to describe agents' behaviour, such that overfitting occurs, which causes these models not to be structural. This limits their usefulness for advising policymakers, as illustrated by one of the most important DSGE models (Smets and Wouters, 2007), which had very poor predictive power.

This model, however, does not fall prey to these issues: its specification is built on a solid foundation of microeconomic principles and is calibrated to US data by hand using only two shocks. The addition proposed in this paper is based on empirical evidence (see section 1) and intuition. The simulated economy, however, does not lend itself to advising policymakers, since these results are highly stylised and contextual.

Another important point of discussion in this paper surrounds the rational expectations hypothesis, since it is intuitively clear that economic agents do not act rationally, nor do they know the entire economy up to its stochastic processes. Therefore, this paper tries to extend the framework by allowing the monetary authority to act differently in the steady state versus during a shock. This result, however, is obtained within the computational framework of Dynare. Dynare approximates the mathematical functions which describe agents' behaviour around the steady state and determines the dynamic equilibrium from there. This means that a monetary authority which acts differently during a shock can only be simulated up to a certain degree, so this paper merely introduces an element of bounded rationality, where the monetary authority sets a coefficient as a reaction to whether volatility takes on a high or low value. This indicator of whether the economy is operating in the steady state could be very interesting with regard to complex nonlinear behaviours. This is, however, not feasible using the calculation methods available through Dynare, since it would require taking the Taylor approximation at different points (when functions exhibit kinks, for example) and combining these. Therefore, the full range of complex behaviours is not captured using Dynare.

The last point of discussion is whether this simple Taylor type rule is the most intuitive way to implement complex behaviour. Since the Taylor rule already implies a restorative action undertaken by the monetary authority in order to create balance, restricting the research question to whether to do so more aggressively under certain circumstances is limiting. However, since the monetary authority is assumed to have only one instrument at its disposal, and since the Taylor type rule is easy to implement (inflation and GDP data are readily available), it is the most logical place to start the inquiry. This type of rule could also be implemented in fiscal policy, for example; the paper this paper builds on, however, finds passive fiscal policy to be optimal.


References

- Chari, V. V. and Kehoe, P. J., "New Keynesian Models: Not Yet Useful for Policy Analysis," Federal Reserve Bank of Minneapolis, July 2008.
- Woodford, M., "The Taylor Rule and Optimal Monetary Policy," Princeton University, 2001.
- Pollin, R., "The Great U.S. Liquidity Trap of 2009–11: Are We Stuck Pushing on Strings?," Review of Keynesian Economics, 2012.
- Fernández-Villaverde, J. and Rubio-Ramírez, J., "Risk Matters: The Real Effects of Volatility Shocks," American Economic Review 101, 2011.
- Sims, C. A., "Probability Models for Monetary Policy Decisions," manuscript, Princeton University, 2003.
- Basu, S. and Bundick, B., "Endogenous Volatility at the Zero Lower Bound: Implications for Stabilization Policy," 2015.
- De Graeve, F., "The External Finance Premium and the Macroeconomy: US Post-WWII Evidence," Journal of Economic Dynamics and Control 32, 2008, 3415–3440.
- Vlaar, P. J. G., "On the Asymptotic Distribution of Impulse Response Functions with Long Run Restrictions," DNB Staff Reports No. 22, 1998.
- Taylor, J. B., "Discretion versus Policy Rules in Practice," Carnegie-Rochester Conference Series on Public Policy 39, December 1993, 195–214.
- Yun, T., "Nominal Price Rigidity, Money Supply Endogeneity, and Business Cycles," Journal of Monetary Economics 37, 1996, 345–370.
- Woodford, M., Interest and Prices: Foundations of a Theory of Monetary Policy, Princeton: Princeton University Press, 2003.
- Schmitt-Grohé, S. and Uribe, M., "Optimal Fiscal and Monetary Policy in a Medium-Scale Macroeconomic Model," NBER Macroeconomics Annual 2005, Cambridge and London: MIT Press, 2006, 383–425.
- Smets, F. and Wouters, R., "Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach," American Economic Review, 2007.
- Clarida, R., Galí, J. and Gertler, M., "The Science of Monetary Policy: A New Keynesian Perspective," Journal of Economic Literature XXXVII, December 1999, 1661–1707.
- Finch, T., "Incremental Calculation of Weighted Mean and Variance," University of Cambridge Computing Service, 2009.
- Andreasen, M. M., Fernández-Villaverde, J. and Rubio-Ramírez, J., "The Pruned State-Space System for Non-Linear DSGE Models: Theory and Empirical Applications," NBER Working Paper, 2013.
- Guerrón-Quintana, P. and Inoue, A., "Impulse Response Matching Estimators for DSGE Models," Federal Reserve Bank of Philadelphia, October 19, 2015.
- Alexander, C., "Moving Average Models for Volatility and Correlation, and Covariance Matrices," Ch. 8 in Fabozzi (ed.), ICMA Centre Business School, November 2, 2007.
- Winitzki, S., "A Handy Approximation for the Error Function and Its Inverse," MIT, 6 February 2008.


Appendix: Code used in Dynare

var psif_cur, psif_fut, psif_curp, psif_futp, kvar, invar, cvar, lamvar, hvar, rvar, wvar, qvar, uvar, piivar, mcvar, mvar, ptildvar, avar, x1var, x2var, svar, gvar, zvar, lvar, tauvar, taudvar, valuevar, yema, ysema, alphapii, lysema, lsysema, chi_min, chi_max;

varexo e_z e_g;

parameters sigpar, deltpar, betpar, etapar, thetpar, alphapar, smpar, sgpar, smfpar, sbpar, psipar, rho_g, rho_z,
taulbarr, zbarr, qbarr, piibarr, hbarr, ptildbarr, rbarr, mcbarr, kapabarr, kbarr, sbarr, ubarr, abarr, mbarr, gbarr, lbarr, nufparbarr, chibarr, wbarr, inbarr, cbarr, nuhparbarr, gamparrbarr, taustar, lstar, rstar, piistar, astar, alphar, alphay, gamma1par,
alph_a, alph_b, cay, delt, chiminpre, chimaxpre, chi_med, kay, ray, aaprox, ppow1, ppow2, logvarysema, piiii;

//calibrated parameters
sigpar=2;
deltpar=1.1^(1/4)-1;
betpar=1.04^(-1/4);
etapar=5;
thetpar=0.3;
alphapar=0.8;
smpar=0.17*4;
sgpar=0.17;
smfpar=2/3;
sbpar=0.42*4;
psipar=0.09;
rho_g=0.87;
rho_z=0.8556;

alph_a=1/1000; //smoothing weight for yema
alph_b=9/100;  //smoothing weight for ysema
cay=1.50;
delt=0.0;
chiminpre=3.5802e-05;
chi_med=3.7538e-05;
chimaxpre=4.3226e-05;
kay=7;
ray=0.55;
aaprox=0.147;
ppow1=0.05;
ppow2=0.95;
logvarysema=0.0554;
piiii=pi;

//deterministic steady state to define additional parameters
taulbarr=0;
zbarr=1;
qbarr=1;
piibarr=1.042^(1/4);
hbarr=0.2;
ptildbarr=((1-alphapar*piibarr^(etapar-1))/(1-alphapar))^(1/(1-etapar));
rbarr=piibarr/betpar;

mcbarr=(etapar-1) / etapar * (1-alphapar * betpar * piibarr^etapar ) / (1 - alphapar * betpar * piibarr^(etapar-1)) * ptildbarr;

taudbarr=-(sbpar + smpar/rbarr ) * (1- rbarr/piibarr) - smpar * (1-1/rbarr) + sgpar;

kapabarr=((1/betpar + deltpar -1) / ((1-taudbarr) * mcbarr * thetpar))^(1/(thetpar-1));

kbarr=kapabarr*hbarr;

sbarr=((1-alphapar)*ptildbarr^(-etapar))/(1-alphapar*piibarr^etapar);
ubarr=(1/betpar+deltpar-1-deltpar*taudbarr)/(1-taudbarr);
abarr=ubarr*kbarr+hbarr*mcbarr*(1-thetpar)*kapabarr^thetpar;
mbarr=smpar*abarr;


lbarr=sbpar*rbarr*abarr+mbarr;

nufparbarr = smfpar * mbarr / (mcbarr * (1-thetpar) * kapabarr^thetpar *hbarr- smfpar * mbarr * (1-1/rbarr));

chibarr = kapabarr^thetpar * hbarr - sbarr * abarr;

wbarr=mcbarr*zbarr*(1-thetpar)*(kapabarr^thetpar)/(1+nufparbarr*(1-1/rbarr));

inbarr=deltpar*kbarr;
cbarr=abarr-inbarr-gbarr;

nuhparbarr = (mbarr-nufparbarr * wbarr *hbarr) / cbarr;

gamparrbarr = (1-hbarr) / cbarr * wbarr * (1-taudbarr) / (1+nuhparbarr * (1-1/rbarr));

//ramsey steady state constants, directly from SGU code ramsey_ss.f or type and run the ramsey model

taustar=0.056800953560735;
lstar=0.401717591478016;
rstar=1.009747553476407;
piistar=0.999895179763840;
astar=0.362663399048560;

//optimal rule parameters
alphar=0.01;
//alphapii=3;
alphay=0.5;
gamma1par=0.5;

model;

psif_cur=1-(psipar/2)*(invar/invar(-1)-1)*(invar/invar(-1)-1); //capital adjustment cost function for x=i_t/i_{t-1}

psif_fut=1-(psipar/2)*(invar(+1)/invar -1)*(invar(+1)/invar -1); //capital adjustment cost function for x=i_{t+1}/i_t

psif_curp=(-psipar/invar(-1))*(invar/invar(-1)-1); //derivative in i_t of psif_cur

psif_futp=psipar*(invar(+1)/(invar*invar))*(invar(+1)/invar-1); //derivative in i_t of psif_fut

kvar=(1-deltpar)*kvar(-1)+invar*psif_cur; //(7) capital evolution equation


cvar^(-sigpar)*(1-hvar)^(gamparrbarr*(1-sigpar))=lamvar*(1+nuhparbarr*(1-1/rvar)); //(8) + (28) modified FOC/consumption

gamparrbarr*cvar/(1-hvar)=wvar*(1-taudvar)/(1+nuhparbarr*(1-1/rvar));//(9)+(8)+(28) modified FOC/leisure

lamvar=lamvar*qvar*(psif_cur+invar*psif_curp)+betpar*lamvar(+1)*qvar(+1)*invar(+1)*psif_futp; //(11) foc/investment

lamvar*qvar=betpar*lamvar(+1)*( (1-taudvar(+1))*uvar(+1)+qvar(+1)*(1-deltpar)+deltpar*qvar(+1)*taudvar(+1) ); //(12) foc/capital, last term on RHS is to be changed removed with fiscal policy

lamvar=betpar*rvar*lamvar(+1)/piivar(+1); //(13)= (10) + (28) foc/money holding-->euler equation

mcvar*zvar*(1-thetpar)*((kvar(-1)/hvar)^thetpar)=wvar*(1+nufparbarr*(1-1/rvar)); // (23) firm demand for labor

mcvar*zvar*thetpar*((kvar(-1)/hvar)^(thetpar-1))=uvar; //(24) firm demand for capital

mvar=nufparbarr*wvar*hvar+nuhparbarr*cvar; //(27) real balances held by firms and households

alphapar*piivar^(etapar-1)+(1-alphapar)*ptildvar^(1-etapar)=1; //(29) aggregate price index st. Calvo pricing

x1var=ptildvar^(-1-etapar)*avar*mcvar+alphapar*betpar*(lamvar(+1)/lamvar)*(piivar(+1))^(etapar)*(ptildvar/ptildvar(+1))^(-1-etapar)*x1var(+1); //(30) 1st part recursive decomposition of (22), avar=aggregate output

x2var=ptildvar^(-etapar)*avar+alphapar*betpar*(lamvar(+1)/lamvar)*(piivar(+1))^(etapar-1)*(ptildvar/ptildvar(+1))^(-etapar)*x2var(+1); //(31) 2nd part recursive decomposition of (22)

x2var=(etapar/(etapar-1))*x1var; //(32) same as (22) but without infinite sums, foc max profit/prices

svar*avar=zvar*(kvar(-1)^thetpar)*(hvar^(1-thetpar))-chibarr; // (33)

avar=cvar+invar+gvar; //(34) aggregate resource equations

svar=(1-alphapar)*ptildvar^(-etapar)+alphapar*(piivar^etapar)*svar(-1);
lvar=(rvar/piivar)*lvar(-1)+rvar*(gvar-tauvar)-mvar*(rvar-1); //(14) modified government budget constraint


tauvar=taustar+gamma1par*(lvar(-1)-lstar); //(16) fiscal policy rule, passive if gamma1par is fixed

//Volatility estimators

yema = (1-alph_a)*yema(-1) + alph_a*avar;

ysema = (1-alph_b)*ysema(-1) + (alph_b)*(((avar)-yema)*((avar)-yema(-1)));

lysema= (1-alph_a)*lysema(-1) + (alph_a)*log(ysema);

//lognormal volatility distribution mm estimators

exp(lsysema)=(1-alph_a)*exp(lsysema(-1))+(alph_a)*exp((log(ysema)-lysema)*(log(ysema)-lysema(-1)));

[static] chi_min=chiminpre;
[static] chi_max=chimaxpre;

//Variance bounds (lognormal quantiles via an inverse error function approximation)
chi_min=(1-alph_a)*chi_min(-1)+(alph_a)*exp(-1*sqrt(sqrt(((2/(piiii*aaprox))+(log(1-(2*ppow1-1)^2)/2))^2-(2/aaprox)*(log(1-(2*ppow1-1)^2)/2))-((2/(piiii*aaprox))+(log(1-(2*ppow1-1)^2)/2)))*sqrt(2)*sqrt(lsysema)+lysema)^2);

chi_max=(1-alph_a*10)*chi_max(-1)+(alph_a*exp(+1*sqrt(sqrt(((2/(piiii*aaprox))+(log(1-(2*ppow2-1)^2)/2))^2-(2/aaprox)*(log(1-(2*ppow2-1)^2)/2))-((2/(piiii*aaprox))+(log(1-(2*ppow2-1)^2)/2)))*sqrt(2)*sqrt(lsysema)+lysema)^2);

//inflation coefficient:
alphapii = ((-1)*exp(2*chi_max*(chi_max+chi_min)^(-1)*kay)+exp(2*chi_min*(chi_max+chi_min)^(-1)*kay))^(-1)
*(exp(2*chi_max^ray*chi_min^(1+(-1)*ray)*(chi_max+chi_min)^(-1)*kay)+exp(2*(chi_max+chi_min)^(-1)*kay*ysema))^(-1)
*(cay*((-1)*exp(2*chi_max*(chi_max+chi_min)^(-1)*kay)+exp(2*chi_min*(chi_max+chi_min)^(-1)*kay))
*(exp(2*chi_max^ray*chi_min^(1+(-1)*ray)*(chi_max+chi_min)^(-1)*kay)+exp(2*(chi_max+chi_min)^(-1)*kay*ysema))
+delt*(2*exp(2*kay)+exp(2*chi_min^(1+(-1)*ray)*(chi_max+chi_min)^(-1)*(chi_max^ray+chi_min^ray)*kay)
+exp((chi_max+chi_min)^(-1)*(2*chi_max*kay+2*chi_max^ray*chi_min^(1+(-1)*ray)*kay))
+(-1)*exp(2*(chi_max+chi_min)^(-1)*kay*(chi_max+ysema))
+(-1)*exp(2*(chi_max+chi_min)^(-1)*kay*(chi_min+ysema))
+(-2)*exp((chi_max+chi_min)^(-1)*(2*chi_max^ray*chi_min^(1+(-1)*ray)*kay+2*kay*ysema))));

log(rvar/rstar)=alphar*log(rvar(1)/rstar)+alphapii*log(piivar/piistar)+alphay*log(avar/astar); //(17) Taylor style rule
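The variance-bound recursions embed Winitzki's (2008) approximation to the inverse error function (with `aaprox=0.147`) in order to evaluate lognormal quantiles inside the model block. As a standalone sketch (the function names below are illustrative, not part of the model code):

```python
import math

def erfinv_winitzki(x, a=0.147):
    """Winitzki (2008) closed-form approximation of the inverse error function."""
    ln_term = math.log(1.0 - x * x) / 2.0
    t = 2.0 / (math.pi * a) + ln_term
    inner = math.sqrt(t * t - 2.0 * ln_term / a) - t
    # guard against a tiny negative value from rounding when x is 0
    return math.copysign(math.sqrt(max(inner, 0.0)), x)

def lognormal_quantile(p, mu, sigma):
    """Quantile of a lognormal(mu, sigma) via the inverse error function,
    mirroring the role of the chi_min / chi_max control-limit formulas."""
    return math.exp(mu + math.sqrt(2.0) * sigma * erfinv_winitzki(2.0 * p - 1.0))
```

With `p = ppow1 = 0.05` and `p = ppow2 = 0.95` this gives the lower and upper control limits for the (approximately lognormal) volatility estimate.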

//shocks

log(zvar) = rho_z*log(zvar(-1)) + e_z ; //(*1)

log(gvar) = (1-rho_g)*log(gbarr) + rho_g*log(gvar(-1)) + e_g; //(*2)

//welfare
valuevar=((cvar*(1-hvar)^gamparrbarr)^(1-sigpar)-1)/(1-sigpar)+betpar*valuevar(+1);

end;

ucl(1)=chimaxpre;
lcl(1)=chiminpre;
mcl(1)=chi_med;
lmcl(1)=chi_med;


a1s = [0:cay/20:cay];
as=[a1s;a1s;a1s;a1s;a1s];
xxb = [1,1:5*length(as)-1];
for xx=1:length(as);
    aa = as(xx);
    xxa = xxb(xx);
    xxucl=ucl(xxa);
    xxlcl=lcl(xxa);
    xxmcl=mcl(xxa);
    xxvarlog=varlog(xxa);
    set_param_value('delt',aa);
    set_param_value('chimaxpre',xxucl);
    set_param_value('chiminpre',xxlcl);
    set_param_value('chi_med',xxmcl);
    set_param_value('logvarysema',xxvarlog);

initval;

//some initial values come directly from the ramsey steady state
tauvar=taustar;
lvar=lstar;
rvar=rstar;
piivar=piistar;
avar=astar;
zvar=1; //(*1)
qvar=1; //(11)
gvar=gbarr; //(*2)
taudvar=tauvar/avar; //(15)

uvar=(1/betpar+deltpar-1-deltpar*taudvar)/(1-taudvar);
mvar=((rvar/piivar)*lvar+rvar*(gvar-tauvar)-lvar)/(rvar-1);

ptildvar=((1-alphapar*piivar^(etapar-1))/(1-alphapar))^(1/(1-etapar)); //(29)
svar=((1-alphapar)*ptildvar^(-etapar))/(1-alphapar*(piivar^etapar));
x2var=(avar*(ptildvar)^(-etapar))*(1/(1-alphapar*betpar*piivar^(etapar-1))); //(31)
x1var=((etapar-1)/etapar)*x2var; //(32)
mcvar=(x1var-alphapar*betpar*(piivar)^(etapar)*x1var)/(ptildvar^(-1-etapar)*avar); //(30)
hvar=((svar*avar+chibarr)/zvar)*((uvar/(mcvar*zvar*thetpar))^(thetpar/(1-thetpar))); //(24+33)
kvar=((uvar/(mcvar*zvar*thetpar))^(1/(thetpar-1)))*hvar; //(24+33)
wvar=mcvar*zvar*(1-thetpar)*((kvar/hvar)^thetpar)/(1+nufparbarr*(1-1/rvar)); //(23)
invar=deltpar*kvar; //(7)
cvar=avar-invar-gvar; //(34)
lamvar=(cvar^(-sigpar)*(1-hvar)^(gamparrbarr*(1-sigpar)))/(1+nuhparbarr*(1-1/rvar)); //(8)
valuevar=(((cvar*(1-hvar)^gamparrbarr)^(1-sigpar)-1)/(1-sigpar))/(1-betpar);
yema=astar;
ysema=xxmcl;
lysema=log(chi_med);
lsysema=xxvarlog;
chi_min=xxlcl;
chi_max=xxucl;
alphapii=cay;
psif_cur=1;
psif_fut=1;
psif_curp=0;
psif_futp=0;

end;


shocks;

var e_z; stderr 0.0064;
var e_g; stderr 0.016;
end;

steady(solve_algo=4);

check;

//extended_path(order=3,periods=3);

stoch_simul(order=3,pruning,irf=100,drop=100,periods=1000,nograph,noprint) alphapii rvar valuevar avar lamvar cvar invar wvar kvar hvar mcvar ysema;

my_results_1.mat(xx)=oo_;
save('my_results_1.mat');
varlog(xx)=mean((log(ysema)-lysema).^2);
ucl(xx)=quantile(ysema(1:length(ysema)),0.95);
lcl(xx)=quantile(ysema(1:length(ysema)),0.05);
mcl(xx)=quantile(ysema(1:length(ysema)),0.50);
end
