
University of Groningen Faculty of Economics and Business

DD MSc in International Economics and Business (IE&B) with Lund University, Lund

Master Thesis

UNBUNDLING THE POST-2004 U.S. PRODUCTIVITY GROWTH SLOWDOWN

Author: Martine Beuckens Student number: 2523205

E-mail address: m.beuckens@student.rug.nl

Date: 20-06-2017


Abstract

The post-2004 slowdown in U.S. total factor productivity (TFP) growth has puzzled academics and policy makers. Several hypotheses have been explored under the standard growth accounting framework, with limited success. That framework rests on stringent assumptions such as constant returns to scale and instantaneous adjustment of factor inputs. I use an alternative framework that relaxes these assumptions in order to estimate a "pure" measure of TFP, which allows me to raise three questions. First, how large is the difference between this "pure" measure of TFP and the official measure? Second, does the "pure" measure show the same kind of slowdown as the official measure? Third, if the "pure" measure shows a slowdown, is it transitory or permanent? To answer these questions I use a rich panel data set comprising 61 industries over the 1998-2013 period. The answers to the first two questions are affirmative: the gap between the two measures is sizable, and the "pure" measure also records a slowdown. The answer to the third question points to a transitory slowdown.


Table of Contents

I. Introduction ... 4

II. Literature Review ... 6

II.A – Mismeasurement ... 6

II.B – Secular Stagnation and Macroeconomic Factors ... 7

II.C – The Solow Paradox Redux: Pessimists versus Optimists ... 8

II.D – Other Explanations for the Productivity Growth Slowdown ... 9

II.E – Main Take-Away Points and the Contribution of this Paper ... 9

III. Theoretical Framework ... 10

III.A – Multifactor Productivity: Alternate Approaches ... 10

III.B – Theory ... 11

IV. Quantitative Analysis ... 13

IV.A – The Source Data ... 13

IV.B – Descriptive Analysis ... 14

IV.C – Diagnostic Checks ... 16

IV.D – Econometric Specification and Estimation Method ... 17

IV.E – Baseline Model and Its Variants ... 18

The Baseline Model ... 18

The Variant Model ... 20

Results ... 23

Robustness Checks ... 24

V. Conclusion ... 25

References ... 26


I. Introduction

Multifactor productivity growth1 in the U.S. averaged around 1.45%2 per year for almost a decade (1995-2004). A slowdown set in after 2004: the average annual growth rate over the following period (2005-2014) was 0.57%, considerably lower than before. Growth of output per hour shows a similar pattern: while the average annual growth rate over 1995-2004 was 3.0%, the 2005-2014 rate was 1.41%, less than half. Many scholars have been puzzled by this striking slowdown, and possible explanations from different perspectives have been put forward, giving rise to a global debate known as the productivity puzzle: despite recent innovations, labor productivity growth has declined sharply since 2004 (Manyika et al. 2017). Hypotheses range from the view that recent technological advances lack the capacity of prior generations of technologies to spur growth, to the view that economically disruptive technologies with high prospects to raise productivity are on their way, as well as measurement error (e.g. Byrne et al. 2016, Byrne et al. 2013, Gordon 2016b, Aghion et al. 2017, among others). Notwithstanding these attempts, a convincing answer to the question of what triggered this productivity growth slowdown has yet to be identified.

So far, researchers have tried to classify the reasons behind the productivity growth slowdown using growth accounting, as developed by Solow (1957). Solow started from a general (nonparametric) production function and combined it with a measure of productivity that behaves as a Hicks-neutral shift parameter3. The resulting measure is known as the Solow residual, which weights the growth of inputs by their shares in the value of output. There are, however, many doubts about this framework, in particular about the restrictions attached to the measure: perfect competition, full capacity utilization and constant returns to scale.

Hulten (2001) acknowledges these critical views on total factor productivity (TFP) by pointing out problems with the measure. For example, the assumption of constant returns to scale has been criticized, since it is not needed to compute TFP (Hulten, 1973). Similarly, the restriction of marginal cost pricing cannot be met under imperfect competition: in that case the price exceeds marginal cost and the TFP measure is biased (Hall, 1988).

1 Multifactor productivity measures the efficiency with which factor inputs are employed in production. One way to measure it is as the growth of output minus the combined growth of inputs. Labor productivity, or output per hour, measures the efficiency with which labor is employed in production. See Section III for more details.

2 Multifactor Productivity Trends 2017, BLS (private nonfarm business sector)


The traditional measure of TFP is less representative of the current world, since scale economies are present to a much larger extent than before the 2000s. As globalization moved forward with the integration of China and India into the world economy, access to new markets allowed U.S. firms to displace inefficient competitors, while export-oriented firms could expand their production and hence exploit scale economies that were not feasible before the globalization era. Larger scale economies make the traditional measure of TFP less accurate, because this measure should track only the efficiency of factor inputs: scale economies, and varying capacity utilization due to the rigidity of factor inputs, generate returns that are not part of the efficiency of factor inputs in the traditional measure. These considerations were widely debated in the 1970s and 1980s and received broad acceptance in the literature, but the discussion has largely disappeared from academic and public discourse (see Hulten, 2001 for a review of the history of the TFP concept). Given the productivity growth slowdown, the timing is appropriate to return to this literature and revisit the productivity puzzle with the goal of gaining fresh insights.

With these developments as a backdrop, this paper addresses three questions. First, how large is the bias underlying the traditional TFP measure? After netting out the effects of scale economies and factor input adjustment, a so-called "pure" (or adjusted) measure of TFP is obtained, and the question is how large the difference is between this measure and the conventionally measured one. The difference between the two indicates a "bias", and a related question is whether this bias has remained constant over time or has worsened. Second, while there is a consensus in the literature that the conventional measure of TFP deteriorated after 2004, the question is whether this movement is also visible in the "pure" measure of TFP. So far, no study has addressed this question. Finally, if the "pure" measure of TFP reveals a different path after 2004 than the conventional measure, the question is whether this change is temporary or permanent. The policy relevance of these three questions is far-reaching: fundamentally, the discussion is about whether the conventional approach to measuring U.S. productivity growth is still accurate in the current world and, if so, whether the productivity slowdown still shows up in the newly constructed measure.


literature, this slowdown will be qualified: on the basis of panel unit root tests, it appears to be temporary; in other words, the drop in productivity growth is not long-lasting. The remainder of the paper is organized as follows. Section II gives a compact overview of the most relevant literature on the productivity puzzle. Section III introduces the theoretical framework used throughout the paper, and Section IV presents the quantitative analysis and the most important econometric results. Section V concludes and discusses limitations.

II. Literature Review

The productivity slowdown is a hotly debated topic, and a sizable literature has emerged. The literature is therefore best reviewed by theme. Four groups of explanations can be distinguished in this debate, following the reports on the productivity paradox by Manyika et al. (2017) and Andrews et al. (2016): mismeasurement; secular stagnation and macroeconomic factors; the Solow paradox redux (pessimists versus optimists); and other explanations. These are discussed in the following subsections.

II.A – Mismeasurement

Searching for reasons behind the productivity slowdown, some authors have concluded that there is a serious mismeasurement issue in the construction of the measures of labor productivity and total factor productivity (Aeppel 2015, Feldstein 2015, Hatzius and Dawsey 2015). According to these authors, the productivity measures poorly capture the fact that information technology (IT) is now used across a broad range of sectors. These authors did not use econometric evidence to support the mismeasurement hypothesis.

Hulten and Nakamura (2017) also acknowledge that GDP is not accurately measured, since many current innovations affect consumers directly and are therefore not captured. They draw an important distinction between resource-saving and output-saving innovations. Resource-saving innovations use fewer resources to produce a given amount of output, whereas output-saving innovations expand the scope and efficiency of consumer choice and thus utility (Hulten and Nakamura, 2017); examples are, respectively, a robot that reduces the number of employees needed and the smartphone. The former type of innovation is measured as TFP in the Solow growth model, whereas the latter is not. Hulten and Nakamura (2017) incorporate this addition and, as a result, measured growth in living standards rises.


networks and platforms with media for downloading and streaming purposes. According to Syverson (2016) there are two aspects to the mismeasurement hypothesis. First, if a growing share of the utility generated by new products is not embodied in their prices, measured output growth slows down regardless of the growth of total surplus. Second, there is the quantitative plausibility of the hypothesis: if price deflators rise too fast or fall too slowly, quantity growth is understated. The next step is to compute the output that would be missing if the slowdown were purely a measurement artifact. Syverson (2016) then identifies four patterns that challenge the hypothesis: the productivity slowdown is not unique to the U.S.; the consumer surplus from the internet is far smaller than the missing output; the productivity slowdown is accounted for by particular industries; and gross domestic income (GDI) is not equal to GDP, which would have to mean that workers are being paid to produce goods given away for free or sold at a steep discount.

Syverson (2016) also finds that the productivity slowdown is not correlated with IT production or use, and concludes that the empirical burden on the mismeasurement hypothesis is too heavy for it to hold. Byrne et al. (2016) perform a similar exercise on the mismeasurement hypothesis. They find that the slowdown in TFP growth is unlikely to be caused by a growing share of low-productivity sectors, and when they develop adjusted measures the productivity paradox actually deepens. Considerable evidence of mismeasurement is found, but no evidence that the biases have worsened since the early 2000s (Byrne et al. 2016). They instead point to underlying macroeconomic factors, since the U.S. is not the only country that experienced the productivity growth slowdown. These macroeconomic factors are discussed in the next section.

II.B – Secular Stagnation and Macroeconomic Factors

Other researchers look for reasons behind the productivity growth slowdown in macroeconomic factors. One of the phrases most frequently used in connection with the productivity slowdown is secular stagnation, a term introduced by Alvin Hansen in the late 1930s (Hansen, 1938). Secular stagnation implies an imbalance between an increasing propensity to save and a decreasing propensity to invest: excessive saving lowers demand, which in turn reduces growth, inflation and interest rates (Hansen, 1938). Lawrence H. Summers, former chief economist at the World Bank, relates the theory to the post-2004 productivity slowdown in the U.S. According to the theory, significant growth can then only be achieved through excessive levels of borrowing and unsustainable investment (Summers, 2016). Summers argues that expansionary fiscal policy should work to overcome secular stagnation, whereas Teulings and Baldwin (2014) argue that overcoming it is not possible with the current macroeconomic toolkit.


another angle, relating declining measures of business dynamism and resource reallocation to the slowdown of productivity growth since the mid-2000s. They do so by examining differences in productivity growth using firm-level data and a decomposition of aggregate productivity. Decker et al. (2017) find that weakening growth in allocative efficiency can account for a large part of the productivity slowdown, while admitting the difficulty of drawing clear distinctions between allocative-efficiency and technological-stagnation mechanisms. In addition, the decline of business dynamism is cautiously linked to the productivity growth slowdown. All of the topics mentioned above are closely related to the pace of technological change and to technological progress itself. Opinions differ on technological change, often referred to as the IT revolution; these are discussed in the next section.

II.C – The Solow Paradox Redux: Pessimists versus Optimists

The well-known economist Robert M. Solow famously remarked in 1987 that "you can see the computer age everywhere but in the productivity statistics" (Solow, 1987). This is often referred to as the Solow paradox; at the time, Solow was alluding to the period of slow productivity growth between 1974 and 1994. The redux refers to the fact that similarly low rates of productivity growth have been experienced after 2004, in the midst of the IT revolution. In searching for reasons, economists debate whether the IT revolution has ended, which divides them into two groups: pessimists and optimists.

Techno-pessimists

Techno-pessimists believe the IT revolution is coming to an end and that the low productivity growth rates measured nowadays are normal, since the period of high productivity growth between 1995 and 2004 is seen as extraordinary (Gordon 2016a, Byrne et al. 2016). According to Gordon (2016b), the IT revolution, labeled the third industrial revolution, has not generated the same widespread effects as the second industrial revolution. He points out that the life-changing innovations introduced between 1870 and 1970 cannot be repeated. In his view of the future, productivity growth will slow down further due to six headwinds: retiring baby boomers, lowered educational attainment, rising income inequality, outsourcing of jobs, energy and environmental regulation, and high household and government debt (Gordon, 2012). These six headwinds will lead to lower output and productivity growth. Another techno-pessimist is Fernald (2014), who compares the post-2004 productivity slowdown with the 1973-1995 pace of productivity growth using growth accounting and Bai-Perron break tests within a multi-sector neoclassical growth model. He concludes that TFP growth had already slowed before the Great Recession and that the slowdown is not confined to the so-called bubble sectors whose collapse triggered the recession (Fernald, 2014).


Techno-optimists

The techno-optimists disagree with the views explained above and believe that the IT revolution is not over and that productivity growth will revive as it did in the period prior to 1995. Byrne et al. (2013) address the question of whether the IT revolution is over by applying growth accounting and decomposing output per hour and multifactor productivity (MFP); they conclude that it is not. The rapid pace of innovation in the semiconductor industry makes them believe there is still scope for further inventions. Baily et al. (2013) also offer an optimistic perspective: looking at the data, they conclude that there are signs of ongoing innovation and expect an energy revolution. In their view, the slowdown reflects the fact that particular sectors such as education, health care, infrastructure and government have not been able to join in productivity growth, a point also acknowledged by Manyika et al. (2017); the reasons are a lack of incentives for change and institutional rigidity (Baily et al. 2013). Two other key players on the optimistic side of the Solow paradox redux are Brynjolfsson and McAfee (2014), who agree that productivity growth is not slowing permanently. Mokyr (2014) is likewise optimistic, arguing that technological progress is far from exhausted. However, some academics are worried about the pace at which innovation diffuses, due to an information gap, as explained in the following section.

II.D – Other Explanations for the Productivity Growth Slowdown

Some explanations do not fit into the groups above and are therefore discussed here. Andrews et al. (2017) and Baily and Montalbano (2016) note that the technological frontier is still moving outward, albeit at a slower pace, but that the diffusion of technologies has become more difficult, leading laggard firms to diverge from the frontier, a point also acknowledged by Manyika et al. (2017). This can be seen as an information gap between digital and physical industries (Mandel and Swanson, 2017). Other explanations put forward by Manyika et al. (2017) include a numerator problem, in the sense that output growth is being driven by only a few sectors. Manyika et al. (2017) and Baily and Montalbano (2016) also mention weakness in capital formation, which leads to a lower level of investment.

II.E – Main Take-Away Points and the Contribution of this Paper

The productivity growth slowdown is a fact, yet academics remain puzzled because there is no convincing answer to the question of why it occurred. Several hypotheses have been put forward, but most approaches are nonparametric in the sense that econometric analysis is not used to support claims about the productivity growth slowdown. In contrast, this paper employs an econometric specification to construct productivity growth measures; hence a parametric approach is used here.


an adjusted measure to analyze whether it deviates from the traditional measure, and unit root tests will be performed to determine whether the slowdown is permanent or temporary. The primary purpose of this paper is to construct a pure measure of multifactor productivity growth under less restrictive assumptions and to analyze whether this adjusted measure shows a similar pattern to the widely used measure. The hypothesis is that the newly constructed measure deviates from the traditional measure and displays a different pattern, since the restrictive assumptions criticized by Hulten (2001) and Morrison (1999) have been removed. Morrison (1999) developed a framework that allows for this kind of analysis, as will become clear in the next section.

III. Theoretical Framework

This section gives an overview of the theoretical framework used throughout the paper. First, some background is provided on multifactor productivity (MFP), which is equivalent to total factor productivity (TFP); the two terms are used interchangeably. Subsection III.B explains the theory behind the framework.

III.A – Multifactor Productivity: Alternate Approaches

Multifactor productivity can be defined as output per unit of combined input; multifactor productivity growth is then a measure of the efficiency with which factor inputs are utilized. According to Morrison (1999), multifactor productivity growth can be measured in two ways: primal and dual, or output-side and cost-side respectively. The primal approach is based on the production function $Y = Y(X, t)$, whereas the dual approach is based on the long-run cost function $C = C(W, Y, t)$, where $Y$ is output, $C$ is total cost, $X$ is a vector of inputs, $W$ is the corresponding vector of input prices, and $t$ is a time index representing technology. The two approaches generally produce different outcomes, depending on the underlying assumptions. The primal approach measures multifactor productivity growth as the growth of output not attributable to the growth of inputs. The dual measure, in contrast, accounts for the change in costs not attributable to changes in output and factor input prices. The two measures coincide when there are constant returns to scale (Morrison, 1999) and perfect competition, defined by $P_Y Y = C$, where $P_Y$ is the price of output. In addition to these two assumptions, it is also assumed that factor inputs adjust immediately. When these hypotheses are justified, multifactor productivity can be regarded as a trustworthy indicator of technical change.
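To make the two approaches concrete, the sketch below computes both residuals for a single industry-year. The growth rates and cost shares are made-up numbers, not values from the KLEMS data, and the only identity imposed is the cost definition C = sum_j w_j x_j; the dual residual is the one derived as equation (1) in the next subsection.

```python
# Sketch: primal and dual measures of multifactor productivity growth for a
# single industry-year, with illustrative (not KLEMS) numbers.

def primal_mfp_growth(dY, dX, shares):
    """Primal residual: output growth minus share-weighted input growth."""
    return dY - sum(s * dx for s, dx in zip(shares, dX))

def dual_residual(dC, dY, dW, shares):
    """Dual residual as in equation (1): cost growth net of output growth and
    share-weighted input-price growth."""
    return dC - dY - sum(s * dw for s, dw in zip(shares, dW))

shares = [0.26, 0.32, 0.42]        # cost shares of capital, labor, intermediates
dX = [0.020, 0.010, 0.015]         # input quantity growth rates
dW = [0.030, 0.025, 0.020]         # input price growth rates
dY = 0.030                         # output growth rate

# Total cost growth implied by the identity C = sum_j w_j x_j.
dC = sum(s * (dw + dx) for s, dw, dx in zip(shares, dW, dX))

# With the cost identity imposed, the dual residual equals minus the primal one.
print(primal_mfp_growth(dY, dX, shares))   # prints  0.0153
print(dual_residual(dC, dY, dW, shares))   # prints -0.0153
```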


III.B – Theory

The measure of multifactor productivity growth under the dual approach is derived by taking the elasticity with respect to $t$ of the long-run cost function $C = C(W, Y, t)$ defined above, which yields

$$\varepsilon_{C,t} \;=\; \frac{\dot C}{C} \;-\; \frac{\dot Y}{Y} \;-\; \sum_{j=K,L,M} \frac{w_j x_j}{C}\,\frac{\dot w_j}{w_j} \qquad (1)$$

The outcome mirrors the one obtained with the primal approach, i.e. by taking the elasticity with respect to $t$ of the production function $Y = Y(X, t)$. The primal elasticity $\varepsilon_{Y,t}$ can be derived from (1) by substituting

$$\frac{\dot C}{C} \;=\; \sum_j \frac{w_j x_j}{C}\left(\frac{\dot w_j}{w_j} + \frac{\dot x_j}{x_j}\right),$$

which follows from the definition of total cost, $C = \sum_j w_j x_j$, giving

$$\varepsilon_{Y,t} \;=\; \frac{\dot Y}{Y} \;-\; \sum_{j=K,L,M} \frac{w_j x_j}{C}\,\frac{\dot x_j}{x_j} \qquad (2)$$

Nevertheless, the relationships above hold if and only if $P_Y Y = C$, which requires that none of the following restrictions is violated: constant returns to scale, which rules out any returns generated by technological or supply characteristics because economies of scale are not possible; immediate adjustment of factor inputs, implying that inputs always operate at full capacity so that no returns can arise from varying their utilization; and perfect competition, which rules out returns gained by exploiting market power.

When the assumption of constant returns to scale is satisfied, total cost is proportional to the volume of output, $C = Y \cdot c(W, t)$, implying $\partial C / \partial Y = c = C/Y$, as used in (1). When this restriction is violated, a deviation between average and marginal costs arises, while the output price $P_Y$ remains tied to average cost $C/Y$. Accordingly, the measure of multifactor productivity growth should net out the cost advantages that firms obtain simply by producing at a larger level of output, since these are unrelated to technical change. Therefore, the term $\dot Y / Y$ in (1) should be replaced by $\varepsilon_{C,Y}\,\dot Y / Y$, where $\varepsilon_{C,Y} = \partial \ln C / \partial \ln Y$ represents the inverse of returns to scale; returns to scale are expressed as $1/\varepsilon_{C,Y}$. If $\varepsilon_{C,Y} = 1$, constant returns to scale hold.


This condition is also linked to perfect competition: if $MC = AC$, there is no deviation between price and marginal cost. However, $MC \neq AC$ can arise because of scale economies. Netting out the effect of non-constant returns to scale changes the formula for multifactor productivity growth to the following:

$$\varepsilon^{scale}_{C,t} \;=\; \frac{\dot C}{C} \;-\; \varepsilon_{C,Y}\,\frac{\dot Y}{Y} \;-\; \sum_{j=K,L,M} \frac{w_j x_j}{C}\,\frac{\dot w_j}{w_j} \qquad (3)$$

Next to correcting for scale economies, the MFP growth measure should also be amended for violations of the full-capacity assumption. Full capacity utilization would imply that machines are working continuously, which is not realistic. The problem is that the fixed inputs are valued at a rental price that does not reflect the true value of their marginal product. A reliable way to value capital is to use the firm's internal valuation, the shadow price, and to calculate the cost consequences of operating away from capacity. To do so, two cost functions are constructed around the short-run (variable) cost function $G(\cdot)$, with one quasi-fixed factor input, capital: total cost $C = G(W, K, Y, t) + W_K K$ and shadow total cost $C^{*} = G(W, K, Y, t) + Z_K K$. Here $W$ is the vector of prices of the variable inputs (labor and intermediate inputs in this case) and $W_K$ is the rental price of capital. The shadow value of capital is $Z_K = -\,\partial G / \partial K$. These two cost measures define capital utilization as $CU = C^{*}/C$. When $Z_K < W_K$, the marginal benefit of an additional unit of capital is less than its rental cost, so capital is underutilized and $CU < 1$; conversely, when $CU > 1$ the firm operates above capacity.
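A minimal numerical sketch of this calculation, with made-up values for the variable cost, the capital stock and the two prices, is given below.

```python
# Sketch of the capacity-utilization calculation: C = G + W_K*K is observed
# total cost, C* = G + Z_K*K is shadow total cost, and CU = C*/C.
# All numbers are illustrative placeholders.

def capital_utilization(G, K, W_K, Z_K):
    """Return (total cost, shadow cost, CU) for one industry-year."""
    total_cost = G + W_K * K           # variable cost plus rental cost of capital
    shadow_cost = G + Z_K * K          # capital valued at its shadow price Z_K = -dG/dK
    return total_cost, shadow_cost, shadow_cost / total_cost

C, C_star, CU = capital_utilization(G=80.0, K=50.0, W_K=0.40, Z_K=0.32)
print(CU)   # Z_K < W_K  ->  CU < 1: capital is underutilized
```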

The imputation of quasi-fixed capital has several consequences for the MFP growth measure. First, the economy may be in a subequilibrium, so the identity $P_Y Y = C$ is replaced by a relationship involving the shadow cost, $P_Y Y = C \cdot CU = C^{*}$. Second, the formula for MFP growth must not only allow for the fact that capital is quasi-fixed, so that its shadow valuation has to be taken into account, but this fixity also opens the door for scale effects through the elasticity of cost with respect to capital,

$$\varepsilon_{C,K} \;=\; \frac{\partial \ln C}{\partial \ln K} \;=\; \frac{(W_K - Z_K)\,K}{C}.$$

Including these changes, the formula for multifactor productivity growth becomes

$$\varepsilon^{fixity}_{C,t} \;=\; \frac{\dot C}{C} \;-\; \varepsilon_{C,Y}\,\frac{\dot Y}{Y} \;-\; \varepsilon_{C,K}\,\frac{\dot K}{K} \;-\; \sum_{j=L,M} \frac{w_j x_j}{C}\,\frac{\dot w_j}{w_j} \;-\; \frac{W_K K}{C}\,\frac{\dot W_K}{W_K} \qquad (4)$$


Up to this point the cost elasticity $\varepsilon_{C,Y}$ has been used to account for the absence of constant returns to scale in (3); in (4) it has also been used because the elasticity, capital utilization and the capital-cost elasticity are linked. This link is important for establishing a formula for MFP growth that captures both scale economies and capital utilization. To do so, the cost elasticity $\varepsilon_{C,Y}$ is decomposed into the long-run elasticity of cost and capital utilization, $\varepsilon_{C,Y} = \varepsilon^{L}_{C,Y} \cdot CU$, with

$$\varepsilon^{L}_{C,Y} \;=\; \frac{\partial C}{\partial Y}\,\frac{Y}{C^{*}} \;=\; \frac{MC \cdot Y}{C^{*}},$$

the cost elasticity evaluated at the steady-state (shadow) value of the quasi-fixed input $Z_K$, where $MC$ is short-run marginal cost. The scale measure $\varepsilon^{L}_{C,Y}$ can also be used to study the relationship between marginal costs and average costs $C/Y$. Using this definition of the long-run cost elasticity, equation (4) can be rewritten so that both scale economies and capital utilization are taken into account in the following measure of multifactor productivity growth:

$$\varepsilon^{All}_{C,t} \;=\; \frac{\dot C}{C} \;-\; \varepsilon^{L}_{C,Y}\,CU\,\frac{\dot Y}{Y} \;-\; \varepsilon_{C,K}\,\frac{\dot K}{K} \;-\; \sum_{j=L,M} \frac{w_j x_j}{C}\,\frac{\dot w_j}{w_j} \;-\; \frac{W_K K}{C}\,\frac{\dot W_K}{W_K} \qquad (5)$$

The superscript "All" indicates that both restrictions have been taken into account.
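To summarize how the adjusted measures are built up, the sketch below assembles the scale-, fixity- and fully adjusted measures from their components, following equations (3)-(5) as reconstructed above; all numerical inputs are illustrative placeholders rather than estimates from the thesis.

```python
# Sketch of the adjusted MFP growth measures in equations (3)-(5); every input
# below is an illustrative placeholder.

def mfp_scale(dC, dY, eps_CY, shares, dW):
    """Equation (3): output growth weighted by the cost elasticity eps_{C,Y}."""
    return dC - eps_CY * dY - sum(s * dw for s, dw in zip(shares, dW))

def mfp_fixity(dC, dY, dK, eps_CY, eps_CK, s_LM, dW_LM, s_K, dWK):
    """Equation (4): capital quasi-fixed, valued through eps_{C,K} and its rental price."""
    variable_prices = sum(s * dw for s, dw in zip(s_LM, dW_LM))
    return dC - eps_CY * dY - eps_CK * dK - variable_prices - s_K * dWK

def mfp_all(dC, dY, dK, eps_CY_long, CU, eps_CK, s_LM, dW_LM, s_K, dWK):
    """Equation (5): the cost elasticity decomposed as eps^L_{C,Y} * CU."""
    return mfp_fixity(dC, dY, dK, eps_CY_long * CU, eps_CK, s_LM, dW_LM, s_K, dWK)

example = dict(dC=0.039, dY=0.030, dK=0.020, eps_CK=0.05,
               s_LM=[0.32, 0.42], dW_LM=[0.025, 0.020], s_K=0.26, dWK=0.030)
print(mfp_scale(0.039, 0.030, 1.1, [0.26, 0.32, 0.42], [0.030, 0.025, 0.020]))
print(mfp_fixity(eps_CY=1.1, **example))
print(mfp_all(eps_CY_long=1.05, CU=0.98, **example))
```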

IV. Quantitative Analysis

This section is devoted to the quantitative analysis. First, the dataset used in the analysis is described in more detail, followed by a descriptive analysis and several diagnostic checks. The econometric analysis then starts with a baseline regression, complemented by kernel distributions and tests for change in the results. A robustness check is performed using a model with a short-run cost function in which capital is quasi-fixed, followed by a similar analysis using kernel distributions and tests for a change in the trend.

IV.A – The Source Data

This paper uses the U.S. KLEMS dataset provided by the Bureau of Economic Analysis and the Bureau of Labor Statistics, an integrated industry-level production account released in June 2015. The dataset comprises 63 industries of the U.S. economy and covers the years 1998-2013. It complies with the international guidelines of the OECD productivity manual (Schreyer, 2001) together with the 2008 revisions to the System of National Accounts. The dataset contains information on gross output and value added as well as on the corresponding inputs: labor, capital and intermediate goods. Each of these categories is further subdivided, but the analysis uses only aggregate data on labor, capital and intermediate goods, with no distinction made within the groupings. One adjustment was made to this rich panel dataset: the two government sectors (State and Local, and Federal) were omitted in order to focus fully on the market economy.
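A minimal sketch of this data-preparation step is given below; the file name and column labels are hypothetical and do not reflect the actual layout of the BEA/BLS integrated account.

```python
# Sketch of the data preparation: load the KLEMS panel, drop the two government
# sectors, and keep the aggregate output and input series.
import pandas as pd

klems = pd.read_csv("us_klems_1998_2013.csv")   # hypothetical export of the KLEMS account

# Drop the government sectors to keep only the market economy (63 -> 61 industries).
government = ["Federal", "State and local"]
klems = klems[~klems["industry"].isin(government)]

# Keep aggregate gross output, value added and the K, L, M input totals.
cols = ["industry", "year", "gross_output", "value_added",
        "capital_input", "labor_input", "intermediate_input"]
panel = klems[cols].set_index(["industry", "year"]).sort_index()
print(panel.shape)
```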


IV.B – Descriptive Analysis

The panel dataset used in the analysis is extensive; it is summarized in the supplement (pp. 2-8) provided with the thesis. One table covers the full period 1998-2013, a second the period 1998-2003 and a third the period 2004-2013. This division was chosen because the productivity growth slowdown started in 2004, so the data could differ between the two sub-periods. The data are aggregated in Table 1, which shows that the growth rates for 1998-2003 are higher than those for 2004-2013, while the shares of capital, labor and intermediates have not changed much over the years. Domar4 weights are used to construct the table below.

Table 1 – Summary Statistics (%)

Period       ΔC/C    ΔY/Y    Δw_K/w_K   Δw_L/w_L   Δw_M/w_M   S_K      S_L      S_M
1998-2013    7.05    2.94    4.84       4.94       4.62       26.40    32.22    41.38
1998-2003    6.50    3.83    4.89       6.04       3.25       25.14    33.28    41.59
2004-2013    5.89    1.79    3.31       4.96       4.36       27.26    31.56    41.18

Notes: ΔC/C is the growth rate of total cost, ΔY/Y the growth rate of output, Δw_K/w_K the growth rate of the cost of capital, Δw_L/w_L the growth rate of the cost of labor, Δw_M/w_M the growth rate of the cost of intermediate goods, and S_K, S_L, S_M the shares of capital, labor and intermediate goods in the cost function, respectively.

The fact that the number for the whole period does not always lie between the numbers for the two sub-periods can be explained by changes in the weights of the industries over time.
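The Domar weighting described in footnote 4 can be sketched as follows: each industry's weight is its gross output divided by aggregate value added, so the weights need not sum to one. The three-industry example is purely illustrative.

```python
# Sketch of Domar-weighted aggregation of an industry-level variable.
import pandas as pd

def domar_aggregate(df, col):
    """Domar-weighted aggregate of an industry-level growth rate or share."""
    weights = df["gross_output"] / df["value_added"].sum()
    return (weights * df[col]).sum()

example = pd.DataFrame({
    "industry": ["A", "B", "C"],
    "gross_output": [120.0, 80.0, 50.0],
    "value_added": [70.0, 45.0, 35.0],
    "dY": [2.5, 1.2, -0.4],          # output growth in percent
})
print(domar_aggregate(example, "dY"))   # weights sum to 250/150, i.e. above one
```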

Another way to show how the data are distributed is to construct kernel density plots, also known as the Parzen-Rosenblatt window method5. This is a nonparametric approach offering great flexibility (Zambom and Dias, 2012), used to model the underlying structure of the data. Kernel density plots constructed separately for the two periods allow the difference between their distributions to be studied in more detail. These plots have been constructed for the traditional multifactor productivity growth measure, the elasticity of scale, the elasticity of cost with respect to capital and the measure of capital utilization, as defined in the theoretical framework in the preceding section.

The first graph (1.A), showing the distribution of multifactor productivity growth for the two periods, reveals an interesting pattern in line with the findings in the literature discussed in Section II: a drop in productivity growth after 2004. Whereas the mean in the first period revolved around 1%, the second period shows a mean close to zero. The graph on scale economies (1.B) shows a different picture: the distribution for the first period has a higher kurtosis, i.e. it is more concentrated, while the means for both periods are around 1.1. Graph 1.C gives a similar picture, a difference in kurtosis with little change in the means between the two periods. The last graph (1.D), on capital utilization, also reveals an interesting fact: before 2004 the economy was operating at full capacity, perhaps slightly above it, but this clearly changed after 2004, when capital utilization dropped from slightly above 1 to slightly below 1, implying that the economy was operating below capacity.

4 Domar weights are often used in KLEMS datasets and are constructed as weight_i = Y_i / Σ_i VA_i: the weight of an industry is its gross output divided by aggregate value added. The weights do not sum to one, reflecting the presence of integration and aggregation (Domar, 1961).

Figure 1 – Kernel Density Plots

1.A – Multifactor Productivity (%): density before 2004 versus after 2004
1.B – Inverse Scale Elasticity: density before 2004 versus after 2004
1.C – Cost Elasticity with respect to Capital: density before 2004 versus after 2004
1.D – Capital Utilization: density before 2004 versus after 2004
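The densities in Figure 1 can be reproduced along the following lines; the two samples below are placeholder draws standing in for the industry-level MFP growth rates before and after 2004.

```python
# Sketch of the kernel density comparison in Figure 1 (Parzen-Rosenblatt
# estimator with a Gaussian kernel); the data are simulated placeholders.
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
before_2004 = rng.normal(1.0, 0.4, 366)    # placeholder: mean around 1 percent
after_2004 = rng.normal(0.0, 0.5, 610)     # placeholder: mean around 0 percent

def plot_density(values, label):
    kde = gaussian_kde(values)             # bandwidth chosen by Scott's rule
    grid = np.linspace(values.min() - 0.5, values.max() + 0.5, 200)
    plt.plot(grid, kde(grid), label=label)

plot_density(before_2004, "Density MFP before 2004")
plot_density(after_2004, "Density MFP after 2004")
plt.xlabel("Multifactor Productivity (%)")
plt.ylabel("Density")
plt.legend()
plt.show()
```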

IV.C – Diagnostic Checks

Several diagnostic checks were performed to verify that the data are suitable for econometric estimation. First, plots were made to assess the normality of the residuals; the kurtosis is rather high, about 11, 6 and 7 for the three residual plots respectively (appendix Figure A.1). Since the dataset is large, with 976 observations, the errors can be assumed to follow a normal distribution with mean zero. A second check concerns heteroskedasticity. The model is specified in logarithms, which already mitigates heteroskedasticity, but it was nonetheless tested for this issue. Heteroskedasticity can occur when the variance of the residuals increases with an independent variable, typically output; however, the plots do not show any signs of heteroskedasticity (appendix Figure A.2). Finally, the variables included in the model display a trend, which suggests they may possess a unit root. If the dependent and explanatory variables possess a unit root and are linked by a long-run cointegrating relationship, the parameter estimates are unlikely to reflect a spurious correlation.
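These checks can be sketched as follows. The thesis relies on visual inspection; the Breusch-Pagan statistic is added here only as a formal complement, and the residual and output arrays are placeholders rather than the estimated residuals.

```python
# Sketch of the residual diagnostics: kurtosis of the residuals and a check of
# whether their variance rises with (log) output.
import numpy as np
from scipy.stats import kurtosis
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(42)
lnY = rng.uniform(3, 6, 976)                 # placeholder for log output
resid = rng.normal(0, 0.05, 976)             # placeholder for equation residuals

print("kurtosis:", kurtosis(resid, fisher=False))   # thesis reports about 11, 6 and 7
lm_stat, lm_p, f_stat, f_p = het_breuschpagan(resid, sm.add_constant(lnY))
print("Breusch-Pagan p-value:", lm_p)
```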

IV.D – Econometric Specification and Estimation Method

The cost function $C = C(W, Y, t)$ can be estimated in several ways. One way, mentioned by Morrison (1999), Hulten (2001) and Berndt (1991), is the translog functional form of the cost function, developed by Christensen, Jorgenson and Lau (1973). According to Hulten (2001), the translog has several important advantages: it avoids the need to impose the marginal productivity conditions, it gives a full representation of the technology, and the measure can be adapted to several assumptions, e.g. non-constant returns. Morrison (1999) notes that flexible functional forms, of which the translog is an example, provide a second-order differential approximation, as shown by Diewert (1974) among others, and therefore offer a close approximation to the true data.

The translog cost function assumes instantaneous adjustment of all inputs. The function extends the Cobb-Douglas functional form: whereas the first-order log-linear form is restrictive and all elasticities equal one, the second-order form relaxes some of these assumptions and allows the elasticities to be estimated6. The translog function is homogeneous of degree one in prices7, but not necessarily in output (Morrison, 1999). In this particular case, the cost function is specified as a translog of the standard form

$$\ln\!\left(\frac{C}{w_M}\right)_{\!h} = \alpha_0 + \delta_h D_h + \alpha_Y \ln Y_h + \sum_i \alpha_i \ln v_{ih} + \alpha_t t + \tfrac{1}{2}\alpha_{YY}(\ln Y_h)^2 + \tfrac{1}{2}\sum_i \alpha_{ii}(\ln v_{ih})^2 + \alpha_{LK}\ln v_{Lh}\ln v_{Kh} + \tfrac{1}{2}\alpha_{tt}t^2 + \sum_i \alpha_{Yi}\ln Y_h \ln v_{ih} + \alpha_{Yt}\ln Y_h\, t + \sum_i \alpha_{it}\ln v_{ih}\, t \qquad (6)$$

Since one of the constraints is homogeneity, input prices are divided by the price of intermediate inputs, $v_i = w_i / w_M$; $D_h$ denotes the industry dummies, and the subscript $h$ indexes industries.

6 A first-order (Cobb-Douglas) approximation takes the form $\ln(C/w_M)_h = \alpha_0 + \delta_h D_h + \alpha_Y \ln Y_h + \sum_i \alpha_i \ln v_{ih} + \alpha_t t$, with $i \in (K, L)$.

7 Homogeneity of degree one in prices implies that if all input prices increase by the same proportion, total cost increases by that proportion as well.


In the function above, $i, j \in (K, L)$. Differentiating and using Shephard's Lemma8, the share equations can be constructed as

$$S_{ih} \;=\; \frac{\partial \ln (C/w_M)_h}{\partial \ln v_{ih}} \;=\; \alpha_i + \alpha_{ii}\ln v_{ih} + \alpha_{ij}\ln v_{jh} + \alpha_{Yi}\ln Y_h + \alpha_{it}\,t \;=\; \frac{w_{ih} x_{ih}}{C_h} \qquad (7)$$

Since the shares must sum to unity, the share of intermediate inputs is the residual: $S_{Mh} = 1 - \sum_i S_{ih}$.

The system consisting of equation (6) and the two share equations defined in (7) should satisfy the following conditions: the cost function should be concave in input prices and non-decreasing in output.
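The step from the cost function to the share equations in (7) can be made explicit; the short derivation below only restates Shephard's lemma and the homogeneity normalization used above.

```latex
% Shephard's lemma gives the cost-minimizing input demand x_i = \partial C / \partial w_i,
% so the logarithmic derivative of the cost function is the cost share:
\[
\frac{\partial \ln C}{\partial \ln w_i}
  \;=\; \frac{w_i}{C}\,\frac{\partial C}{\partial w_i}
  \;=\; \frac{w_i x_i}{C} \;\equiv\; S_i , \qquad i = K, L .
\]
% Because the cost function is homogeneous of degree one in prices and is written in
% relative prices v_i = w_i / w_M, the same derivative applied to \ln(C/w_M) with
% respect to \ln v_i yields the fitted share equations in (7), and the intermediate
% share follows from adding up: S_M = 1 - S_K - S_L.
```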

The system of equations is estimated with the Seemingly Unrelated Regression (SUREG) method developed by Zellner (1962). SUREG estimates the system with one share equation omitted: the equation for intermediate inputs drops out because homogeneity is imposed and the shares sum to one. With the SUREG method it is also possible to impose symmetry constraints on the parameters across the three equations, which makes it very attractive for the model estimated in this paper. The method also estimates the coefficients more efficiently than equation-by-equation estimation (Zellner, 1962), because the correlations among the errors of the different equations are used to improve the results. Adding the isure option iterates the estimation until convergence, which yields maximum likelihood estimates.

IV.E – Baseline Model and Its Variants

The Baseline Model

The baseline model to be estimated is equation (6) together with the two share equations in (7), with $i, j \in (K, L)$. The parameters of the two share equations should equal those of the cost function; therefore, several constraints are imposed. For the baseline, the number of constraints equals 10, since there are 10 parameters that should be equal. In the estimation, $n-1$ industry dummies are included. The regression results are reported in Table 2 below; because the table is large, the dummy coefficients are not shown there but can be found in appendix Figure A.3. These dummies are nonetheless included in the regression.

The overall significance of the parameters is satisfactory at the 1% level, and the R-squared values are high, implying a good model fit. In addition, the imposed restrictions are satisfied, since the constrained coefficients are equal to each other. When studying the full

8 Shephard's Lemma states that the cost-minimizing demand for an input equals the derivative of the cost function with respect to that input's price.


table, including the dummies (as shown in the appendix), one can conclude that the dummies are also mostly significant at the 1% level; hence there is considerable heterogeneity within the panel dataset. The results of the baseline regression can be used to calculate the measure of multifactor productivity growth by taking the derivative with respect to $t$:

$$\varepsilon_{C,t} \;=\; \alpha_t + \alpha_{tt}\,t + \alpha_{Yt}\ln Y_h + \alpha_{Lt}\ln v_{Lh} + \alpha_{Kt}\ln v_{Kh} \qquad (8)$$

In the regression, the dependent variables should be interpreted as follows: $\ln(C/w_M)$ is the natural logarithm of total cost relative to the price of intermediate inputs, and $S_L$ and $S_K$ are the labor share and the capital share in the cost function, respectively. The independent variables are: $\ln Y_h$, the natural logarithm of the index of gross output; $\ln v_{Lh}$ and $\ln v_{Kh}$, the natural logarithms of the prices of labor and capital relative to the price of intermediate inputs; and $t$, the time index. All variables are indices with base year 1998.

The parameters in (8) correspond to those in the regression output. The standard errors of the derived measures are calculated manually, following the formulas in Figure A.5 in the appendix; the standard errors shown there are those of the averages per industry.

The next step is to take the derivative with respect to $Y$ in order to determine the impact of scale economies on the multifactor productivity measure. This derivative is calculated in a similar way to the multifactor productivity growth formula:

$$\varepsilon_{C,Y} \;=\; \alpha_Y + \alpha_{YY}\ln Y_h + \alpha_{YL}\ln v_{Lh} + \alpha_{YK}\ln v_{Kh} + \alpha_{Yt}\,t \qquad (9)$$

The elasticity of scale is thus also calculated from the parameters of the baseline regression. However, in order to calculate the elasticities related to the fixity of capital, as well as the complete measure, a variant regression is needed in which capital is quasi-fixed, because full capacity is not assumed in the variant model, as elaborated in more detail in the following section.
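A sketch of how equations (8) and (9) are evaluated is given below. The coefficient labels mirror the reconstructed equations above; the numerical values are illustrative and are not the Table 2 estimates, and whether the squared-term coefficients enter with an additional factor depends on the normalization chosen in (6).

```python
# Sketch: evaluating equations (8) and (9) at a given data point with estimated
# translog parameters; coefficient values below are illustrative placeholders.

def eps_Ct(coef, lnY, lnvL, lnvK, t):
    """Equation (8): derivative of ln(C/w_M) with respect to t."""
    return (coef["t"] + coef["tt"] * t + coef["Yt"] * lnY
            + coef["Lt"] * lnvL + coef["Kt"] * lnvK)

def eps_CY(coef, lnY, lnvL, lnvK, t):
    """Equation (9): derivative of ln(C/w_M) with respect to ln Y (cost elasticity of output)."""
    return (coef["Y"] + coef["YY"] * lnY + coef["YL"] * lnvL
            + coef["YK"] * lnvK + coef["Yt"] * t)

coef = {"t": -0.002, "tt": 0.0005, "Yt": -0.002, "Lt": -0.001, "Kt": 0.003,
        "Y": 1.9, "YY": -0.10, "YL": -0.10, "YK": -0.05}   # illustrative magnitudes only
print(eps_Ct(coef, lnY=4.5, lnvL=0.1, lnvK=0.05, t=7))
print(eps_CY(coef, lnY=4.5, lnvL=0.1, lnvK=0.05, t=7))
```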


Table 2 – Baseline Cost Function and Factor Shares (Equations 6 & 7)

Variables             (1) ln(C/w_M)            (2) S_L                   (3) S_K
ln Y_h                1.896***  (0.155)        -0.0993***  (0.00593)     -0.0506***  (0.00681)
ln v_Lh               0.684***  (0.0271)       0.0995***   (0.00685)     -0.0183***  (0.00241)
ln v_Kh               0.268***  (0.0319)       -0.0183***  (0.00241)     0.0506***   (0.00245)
t                     -0.00216  (0.0116)       -0.00128*** (0.000184)    0.00332***  (0.000232)
(ln Y_h)^2            -0.103*** (0.0166)
(ln v_Lh)^2           0.0995*** (0.00685)
(ln v_Kh)^2           0.0506*** (0.00245)
t^2                   0.000524*** (0.000110)
ln Y_h · ln v_Lh      -0.0993*** (0.00593)
ln Y_h · ln v_Kh      -0.0506*** (0.00681)
ln Y_h · t            -0.00249  (0.00260)
ln v_Lh · ln v_Kh     -0.0183*** (0.00241)
ln v_Lh · t           -0.00128*** (0.000184)
ln v_Kh · t           0.00332*** (0.000232)
Constant              -6.529*** (0.384)        0.684***    (0.0271)      0.268***    (0.0319)
Fixed Effects         Yes                      Yes                       Yes
Observations          976                      976                       976
R-squared             0.948                    0.972                     0.943

Notes: Standard errors in parentheses: *** p<0.01, ** p<0.05, * p<0.1. Abbreviations and explanations can be found on p. 19.


The Variant Model

Since capital is now quasi-fixed and enters the cost function as the quantity $K$ instead of the price $w_K$, the econometric specification defined in (6) is amended to

$$\ln\!\left(\frac{C}{w_M}\right)_{\!h} = \alpha_0 + \delta_h D_h + \alpha_Y \ln Y_h + \alpha_L \ln v_{Lh} + \alpha_K \ln K_h + \alpha_t t + \tfrac{1}{2}\alpha_{YY}(\ln Y_h)^2 + \tfrac{1}{2}\alpha_{LL}(\ln v_{Lh})^2 + \tfrac{1}{2}\alpha_{KK}(\ln K_h)^2 + \tfrac{1}{2}\alpha_{tt}t^2 + \alpha_{YL}\ln Y_h \ln v_{Lh} + \alpha_{YK}\ln Y_h \ln K_h + \alpha_{Yt}\ln Y_h\,t + \alpha_{LK}\ln v_{Lh}\ln K_h + \alpha_{Lt}\ln v_{Lh}\,t + \alpha_{Kt}\ln K_h\,t \qquad (10)$$

together with the following share equation:

$$S_{Lh} \;=\; \alpha_L + \alpha_{LL}\ln v_{Lh} + \alpha_{LK}\ln K_h + \alpha_{YL}\ln Y_h + \alpha_{Lt}\,t \qquad (11)$$

The corresponding share equation now concerns labor only: because capital enters as a quantity, Shephard's Lemma cannot be used to derive a share equation for capital (Morrison, 1999), and imposing the corresponding parameter restrictions would not be consistent. The number of restrictions is therefore reduced to five. The output of the variant regression is shown below. As in the previous regression table, the dummies are left out of the table but not out of the regression.

The same definitions apply as in the baseline regression; however, since capital is quasi-fixed in this model and was not in the baseline model, $\ln K_h$ has been added, representing the natural logarithm of the index of the capital quantity.


Table 3 – Variant Cost Function and Factor Shares (Equations 10 & 11)

Variables             (1) ln(C/w_M)            (2) S_L
ln Y_h                2.269***  (0.202)        -0.130***   (0.00616)
ln v_Lh               0.631***  (0.0344)       0.0929***   (0.00721)
ln K_h                0.961***  (0.304)        0.0434***   (0.00802)
t                     -0.00162  (0.0136)       -0.00173*** (0.000217)
(ln Y_h)^2            0.0124    (0.0225)
(ln v_Lh)^2           0.0929*** (0.00721)
(ln K_h)^2            0.0577    (0.0481)
t^2                   0.000681*** (0.000113)
ln Y_h · ln v_Lh      -0.130*** (0.00616)
ln Y_h · ln K_h       -0.299*** (0.0568)
ln Y_h · t            -0.00211  (0.00293)
ln v_Lh · ln K_h      0.0434*** (0.00802)
ln v_Lh · t           -0.00173*** (0.000217)
ln K_h · t            -0.00120  (0.00318)
Constant              -10.01*** (0.637)        0.631***    (0.0344)
Fixed Effects         Yes                      Yes
Observations          976                      976
R-squared             0.953                    0.972

Notes: Standard errors in parentheses: *** p<0.01, ** p<0.05, * p<0.1. Abbreviations and explanations can be found on pp. 19 and 21.


Results

The elasticities, both the non-adjusted and the adjusted versions with the corresponding biases, are reported in the aggregate table below, constructed using Domar weights as explained above. More detailed results per industry and per year are listed in the supplement (pp. 13-37).

In the first part of Table 4 (results per year), the traditional measure in the second column, ε_{C,t}, shows a downward trend, whereas the other measures do not necessarily follow the same pattern: the adjusted measures do not simply trend downward but show considerable variation. The biases also display different patterns; they are not flat, and each bias spikes around 2008-2009, meaning that the deviation from the traditional measure was much larger then than in other years. The second part of Table 4 (average annual growth rates by sub-period) shows that the traditional measure supports the widely held view that multifactor productivity growth slowed after 2004; the difference is sizable, about 1.4 percentage points lower growth after 2004 than before. The scale-adjusted measure also reports a lower number for the second period, comparable to the decline in the traditional measure. The difference between the two sub-periods is even larger for the fixity-adjusted measure, almost 1.75 percentage points, and the fully adjusted measure also shows the decline, a difference of about 1.6 percentage points. The widely held view is therefore also supported under less restrictive assumptions.

The results clearly show a slowdown after 2004, but it is not clear whether this deviation is transitory or permanent. In recent years the econometrics literature has proposed a number of tests for unit roots in panel data. In this case a test is needed that is valid when the number of time periods T (years) is small and the number of cross-sectional units N (industries) is large, so that consistency is guaranteed for fixed T and large N. One such test is the Harris-Tzavalis test, which is based on the bias-adjusted least squares dummy variable (within) estimator and therefore allows non-normality but not heteroskedasticity.
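The thesis runs the Harris-Tzavalis test in Stata (xtunitroot ht). As a Python stand-in, the sketch below combines industry-by-industry ADF tests into a Fisher-type (Maddala-Wu) statistic, which is a different but related panel unit-root test; the panel is a simulated placeholder with the dimensions used in the thesis (61 industries, 15 years).

```python
# Sketch of a panel unit-root check on the industry-level MFP growth series.
import numpy as np
from scipy.stats import chi2
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
panel = rng.normal(0.5, 1.0, size=(61, 15))     # placeholder: stationary by construction

pvals = [adfuller(series, maxlag=1, regression="c", autolag=None)[1] for series in panel]
fisher_stat = -2.0 * np.sum(np.log(pvals))       # Maddala-Wu combination of the p-values
p_value = chi2.sf(fisher_stat, df=2 * len(pvals))
print(f"Fisher chi2({2 * len(pvals)}) = {fisher_stat:.1f}, p = {p_value:.4f}")
```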


Table 4 – Various Elasticities

Year        ε_{C,t}   ε^{scale}_{C,t}  Bias     ε^{fixity}_{C,t}  Bias     ε^{All}_{C,t}  Bias
1999        2.118     1.133            -0.984   2.64              0.522    1.618          -0.499
2000        1.985     0.77             -1.216   2.256             0.271    1.46           -0.525
2001        1.789     1.674            -0.116   1.495             -0.295   1.797          0.007
2002        1.57      1.363            -0.208   1.533             -0.037   1.424          -0.147
2003        1.366     0.766            -0.6     1.685             0.32     0.964          -0.402
2004        1.192     0.195            -0.997   1.588             0.395    0.604          -0.588
2005        1.043     -0.21            -1.253   1.337             0.293    0.17           -0.873
2006        0.869     0.02             -0.849   0.719             -0.151   0.083          -0.787
2007        0.683     -0.108           -0.791   0.474             -0.209   0.079          -0.604
2008        0.523     0.641            0.118    -0.165            -0.688   0.546          0.023
2009        0.273     1.397            1.124    -1.497            -1.77    1.578          1.306
2010        0.075     -0.978           -1.053   0.45              0.376    -0.742         -0.817
2011        -0.102    -0.964           -0.861   -0.112            -0.01    -0.817         -0.715
2012        -0.29     -1.546           -1.255   -0.348            -0.058   -1.506         -1.216
2013        -0.461    -1.479           -1.018   -0.71             -0.249   -1.613         -1.152

Average Annual Growth Rates

1999-2013   0.842**   0.178**          -0.664   0.756             -0.086   0.376          -0.466
1999-2003   1.766**   1.141**          -0.625   1.922             0.156    1.453          -0.313
2004-2013   0.380**   -0.303**         -0.684   0.173**           -0.207   -0.162**       -0.542

Note: Elasticities are in percent, while the bias refers to the percentage-point difference from the traditional measure; ** indicates significance at the 5% level. ε_{C,t} refers to the traditional measure of TFP growth, ε^{scale}_{C,t} to the measure adjusted for scale economies, ε^{fixity}_{C,t} to the measure adjusted for factor rigidities, and ε^{All}_{C,t} to the pure measure.

Robustness Checks


V. Conclusion and Implications

The aim of this paper was to unbundle the traditional, widely used measure of multifactor productivity growth. This measure declined after 2004, and the reason remains, despite numerous research efforts, a puzzle to many academics. This paper constructed a measure that does not rely on the usual restrictive assumptions behind the traditional one, using a translog cost function, which provides a second-order approximation. The construction of the adjusted measures of multifactor productivity growth succeeded; however, when the measures are plotted over time, a similar downward trend after 2004 is visible.

Nonetheless, there is an important difference between the traditional and the adjusted measures: the drop in productivity growth appears permanent under the traditional measure, whereas it appears temporary under the adjusted measure. The adjusted measure controls for the existence of scale economies and capital fixity, which brings it closer to reality in an era of globalization. It also paints a brighter picture of the future than the traditional measure: the traditional measure would imply that the time of productivity gains is over, as argued by the techno-pessimists, whereas the adjusted measure is consistent with the view that productivity growth may revive (Brynjolfsson and McAfee, 2014).

If governments accept the adjusted measure as a true measure of multifactor productivity growth, this could have major policy implications. Governments should stimulate the sectors with the most potential to spur multifactor productivity growth; computer and electronic products is a good example of such a sector, and it also has major spillover effects, which would raise multifactor productivity growth even further.


References

Aeppel, T. (2015, July 16). Silicon Valley Doesn't Believe U.S. Productivity Is Down. Retrieved May 30, 2017, from https://www.wsj.com/articles/silicon-valley-doesnt-believe-u-s-productivity-is-down-1437100700

Aghion, P., Bergeaud, A., Boppart, T., Klenow, P. J., & Li, H. (2017). Missing Growth from Creative Destruction. Federal Reserve Bank of San Francisco Working Paper Series, 01-40. doi:10.24148/wp2017-04

Andrews, D., Criscuolo, C., & Gal, P. N. (2016). The Best versus the Rest. OECD Productivity Working Papers No. 5. OECD Publishing, Paris. doi:10.1787/63629cc9-en

Andrews, D., Criscuolo, C., & Gal, P. (2017, March 27). The productivity slowdown's dirty secret: A growing performance gap. Retrieved May 04, 2017, from http://voxeu.org/article/productivity-slowdown-s-dirty-secret-growing-performance-gap

Baily, M. N., Manyika, J., & Gupta, S. (2013). U.S. Productivity Growth: an Optimistic Perspective. International Productivity Monitor, 25, 3-12.

Baily, M. N., & Montalbano, N. (2016, September 27). Why is US productivity growth so slow? Possible explanations and policy responses. Brookings Institution. Retrieved from https://www.brookings.edu/research/why-is-us-productivity-growth-so-slow-possible-explanations-and-policy-responses/

Berndt, E. R. (1991). The practice of econometrics: classic and contemporary. Reading, MA: Addison-Wesley.

Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W.W. Norton & Company.

Brynjolfsson, E., & McAfee, A. (2016). Human Work in the Robotic Future: Policy for the Age of Automation. Foreign Affairs, 95, 139.

Byrne, D. M., Oliner, S. D., & Sichel, D. E. (2013). Is the Information Technology Revolution Over? Finance and Economics Discussion Series (FEDS) Working Papers, 2013-36.

Byrne, D. M., Fernald, J. G., & Reinsdorf, M. B. (2016). Does the United States Have a Productivity Slowdown or a Measurement Problem? Brookings Papers on Economic Activity, 2016(1), 109-182. doi:10.1353/eca.2016.0014

Cardarelli, M. R., & Lusinyan, L. (2015). US Total Factor Productivity Slowdown: Evidence from the US States (No. 15-116). International Monetary Fund.

Christensen, L. R., Jorgenson, D. W., & Lau, L. J. (1973). Transcendental logarithmic production frontiers. The Review of Economics and Statistics, 28-45.

Decker, R. A., Haltiwanger, J., Jarmin, R. S., & Miranda, J. (2017). Declining Dynamism, Allocative Efficiency, and the Productivity Slowdown. American Economic Review, 107(5), 322-326. doi:10.1257/aer.p20171020

Diewert, W. E. (1974). Applications of Duality Theory. In M. D. Intriligator & D. A. Kendrick (Eds.), Frontiers of Quantitative Economics (Vol. II). Amsterdam: North-Holland.

Domar, E. D. (1961). On the measurement of technological change. The Economic Journal, 71(284), 709-729.

Feldstein, M. (2015, May 18). The U.S. Underestimates Growth. Retrieved May 30, 2017, from https://www.wsj.com/articles/the-u-s-underestimates-growth-1431989720

Fernald, J. (2014). Productivity and Potential Output Before, During, and After the Great Recession. NBER Working Paper Series, No. 20248. doi:10.3386/w20248

Gordon, R. J. (2012). Is U.S. Economic Growth Over? Faltering Innovation Confronts the Six Headwinds. CEPR Policy Insight, 63. doi:10.3386/w18315

Gordon, R. J. (2016a). Comments and Discussion. Brookings Papers on Economic Activity, 2016(1), 158-182. doi:10.1353/eca.2016.0018

Gordon, R. J. (2016b). The rise and fall of American growth: the U.S. standard of living since the Civil War. Princeton: Princeton University Press.

Gu, W., & Yan, B. (2016). Productivity Growth and International Competitiveness. Review of Income and Wealth, 63.

Hall, R. E. (1988). The relation between price and marginal cost in US industry. Journal of Political Economy, 96(5), 921-947.

Hansen, A. H. (1938). Full recovery or stagnation? New York: W.W. Norton.

Hatzius, J., & Dawsey, K. (2015). Doing the sums on the productivity paradox v2.0. U.S. Economics Analyst (Goldman Sachs), 15/30.

Hulten, C. R. (1973). Divisia index numbers. Econometrica: Journal of the Econometric Society, 1017-1025.

Hulten, C. R. (2001). Total factor productivity: a short biography. In New developments in productivity analysis (pp. 1-54). University of Chicago Press.

Hulten, C., & Nakamura, L. (2017). Accounting for growth in the age of the internet: The importance of output-saving technical change. NBER Working Paper Series, No. 23315. doi:10.3386/w23315

Mandel, M., & Swanson, B. (2017). The Coming Productivity Boom: Transforming the Physical Economy with Information. Retrieved May 30, 2017, from http://www.techceocouncil.org/clientuploads/reports/TCC%20Productivity%20Boom%20FINAL.pdf

Manyika, J., Remes, J., Mischke, J., & Krishnan, M. (2017). The productivity puzzle: A closer look at the United States. McKinsey Global Institute. Retrieved from http://www.mckinsey.com/global-themes/employment-and-growth/new-insights-into-the-slowdown-in-us-productivity-growth

Mokyr, J. (2014). The Next Age of Invention. City Journal, 24, 12-21.

Morrison Paul, C. J. (1999). Cost structure and the measurement of economic performance: productivity, utilization, cost economics, and related performance indicators. London: Kluwer Academic Publishers.

Parzen, E. (1962). On estimation of a probability density function and mode. The Annals of Mathematical Statistics, 33(3), 1065-1076.

Rosenblatt, M. (1956). Remarks on some nonparametric estimates of a density function. The Annals of Mathematical Statistics, 27(3), 832-837.

Schreyer, P. (2001). The OECD productivity manual: a guide to the measurement of industry-level and aggregate productivity. International Productivity Monitor, 2(Spring), 37-51.

Solow, R. M. (1957). Technical change and the aggregate production function. The Review of Economics and Statistics, 312-320.

Solow, R. M. (1987). We'd Better Watch Out. New York Times Book Review, July 12, 36.

Summers, L. H. (2016). The age of secular stagnation: what it is and what to do about it. Foreign Affairs, 95(2), 2-9.

Syverson, C. (2016). Challenges to Mismeasurement Explanations for the U.S. Productivity Slowdown. NBER Working Paper Series, No. 21974. doi:10.3386/w21974

Teulings, C., & Baldwin, R. (2014). Secular Stagnation: Facts, Causes and Cures. London, UK: CEPR Press.

Zambom, A. Z., & Dias, R. (2012). A review of kernel density estimation with applications to econometrics. arXiv preprint arXiv:1212.2812.

Zellner, A. (1962). An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias. Journal of the American Statistical Association, 57(298), 348-368.

Data

- Bureau of Economic Analysis (BEA)
- Bureau of Labor Statistics (BLS)


Appendix

Figure A.1 – Normality Plots

Density plots of the residuals of Equations 1-3; Equation 2 is formula (7) with i = L, as defined in the text.

Figure A.2 – Residual Plots

Residuals of Equations 1-3 plotted against lnY.

Figure A.3 – Baseline Regression with Dummies

(The full table, including the coefficients on the industry dummies, spans several pages and is not reproduced here; the remaining coefficients coincide with those reported in Table 2.)

Notes: Standard errors in parentheses: *** p<0.01, ** p<0.05, * p<0.1. Abbreviations and explanations can be found on p. 19 and in the supplement (p. 38).


A.5 – Variant Regression with Dummies

(The full table, including the coefficients on the industry dummies, spans several pages and is not reproduced here; the remaining coefficients coincide with those reported in Table 3.)

Notes: Standard errors in parentheses: *** p<0.01, ** p<0.05, * p<0.1. Abbreviations and explanations can be found on pp. 19 and 21 and in the supplement (p. 38).

Figure A.6 – Unit Root Tests

Harris-Tzavalis unit root test
H0: Panels contain unit roots          Number of panels = 61
Ha: Panels are stationary              Number of periods = 15

Measure                                p-value
ε_{C,t}        – Traditional           1.0000
ε^{scale}_{C,t}  – Scale Economies     0.0000
ε^{fixity}_{C,t} – Capital Fixity      0.0000
ε^{All}_{C,t}    – Pure measure

(Additional appendix regression tables, pp. 42-48: cost function and factor share regressions estimated on 366 and 610 observations respectively, both with industry fixed effects. Standard errors in parentheses: *** p<0.01, ** p<0.05, * p<0.1.)
