To investigate a credit channel of monetary policy transmission
A study of the credit channel of monetary policy transmission using clustering and a test of the difference in mean portfolio returns. Four financial ratios are calculated: debt-to-capital, interest coverage, current ratio, and payables turnover. The work includes an analysis of stock market behavior and a comparison of portfolios' performances.
Category: Finance, money and taxes
Type: coursework
Language: English
Date added: 23.10.2016
File size: 1.5 MB
Table of Contents
Introduction. Research Goals
1. Datasets
2. Analysis of stock market behavior
3. Our two-step approach: brief definition with major advantages over earlier methods
4. Comparison of portfolios' performances
Conclusion
Bibliography
Appendix
Introduction. Research Goals
This research is aimed, foremost, at shedding light on how monetary policy in the United States is transmitted to the stock market. On the one hand, the Federal Reserve System's decision on the short-term overnight lending rate (known as the Federal funds rate, or the target rate) is watched closely by market participants and is the single largest common determinant of the market response on the announcement day: in (Bernanke & Kuttner, 2005), unexpected changes in the target rate explained 17% of the total variability of 1-day equity returns on the announcement day. On the other hand, according to the efficient-market hypothesis (EMH) developed in (Fama, 1970), under the semi-strong form of market efficiency stocks must incorporate all known past information and current news. There are debates over the time stocks take to absorb news, and a study by (Chan, 2001) showed that stocks tend to underreact to news. Furthermore, (Bernanke & Kuttner, 2005) showed that the one-month stock reaction is slightly larger than the one-day reaction to the announcement of the target rate, which implies market underreaction. Nevertheless, estimates of immediate market responses to target-rate announcements are large enough to believe that current monetary policy objectives are mostly reflected in stock prices. We give estimates of the market response in the first part of the work.
The real obstacle, debated for years, is the channel by which the target-rate announcement is transmitted to the equity market. Since (Bernanke & Blinder, 1992) defined two channels (demand and credit) of policy transmission, evidence has been presented for the existence of both. Furthermore, (Bernanke & Kuttner, 2005) made it clear that there may be more ways in which a change in the Federal funds rate is connected to the equity market response.
The goal of the paper is to investigate the credit channel of monetary policy transmission. We propose new financial ratios which, to the best of our knowledge, have not been used in this field so far. Furthermore, and more interestingly, instead of the commonly used multiple regression analysis we propose a new method to investigate the existence of the credit channel: a two-step approach involving clustering as a first step and a t-test for the difference of portfolio means for paired data (paired because we compare mean returns for each day) as a second step. This method is proposed foremost because current regression analysis relies on an estimate of the unexpected change in monetary policy, while our methodology does not require such estimation. The scarcity of the futures data necessary for obtaining good estimates of unexpected monetary policy changes makes that analysis applicable to a limited number of countries. Furthermore, ambiguity in deriving continuous futures prices casts doubt on the regression estimates of credit-factor effects, apart from the fact that the regression equation may suffer from omitted variable bias. The latter means that standard errors are generally not applicable; hence, conclusions about the influence of effects should be drawn carefully. Our two-step approach does not depend on futures data and therefore eliminates the flaws outlined above; moreover, it enables a researcher to investigate the credit channel of monetary policy for a wider range of countries. We describe the procedure, show results for the United States in the sections below, and summarize the results in the last part of the work.
1. Data used
For the main part of our research we use a sample of value-weighted returns, which we call “S&P 500*”. We searched for stock prices of the 500 companies listed in the S&P 500 index as of October 31, 2015 for the period from March 25, 1997 to December 16, 2008. Some companies were not listed during the whole period; some were private for part of it; some went bankrupt, and some were objects of acquisitions. Therefore, the actual sample consists of at least 321 and at most 450 stock returns, but it is represented by companies of all 10 S&P 500 sectors defined for this index by Standard and Poor's (Consumer Discretionary, Consumer Staples, Energy, Financials, Health Care, Industrials, Information Technology, Materials, Telecommunications, Utilities).
It would be better to use a list of S&P 500 companies dated as closely as possible to 2008 in order to have more stocks in the sample, but such data was limited. The sample could have been enlarged by companies that went public after December 16, 2008; however, that was the last day of a change in the Fed funds rate, which is at the core of our study. The only change in the target rate since then occurred on December 16, 2015, but monthly equity market values for the preceding month were not available in the main source of companies' financial information for our research, COMPUSTAT GLOBAL. Likewise, we could not extend the scope of the research beyond March 25, 1997 because the previous change in the Fed funds rate occurred on January 31, 1996 (Kuttner, 2000), and the same data on monthly equity market values for individual companies was limited to December 1996.
We needed monthly equity market values in order to go from raw stock price changes to value-weighted returns, to construct a proxy for a broad market value-weighted return, and to work later with value-weighted portfolio returns. For example, the broad market return is the sum of all individual value-weighted stock returns:

\[ R_{t,m} = \sum_{i} w_{i,m}\, r_{i,t,m}, \qquad w_{i,m} = \frac{\overline{MV}_{i,m-1}}{\sum_{j} \overline{MV}_{j,m-1}}, \]

where \(R_{t,m}\) is the 1-day index (S&P 500*) return, \(r_{i,t,m}\) is the 1-day raw stock return of individual company i on day t in month m, \(\overline{MV}_{i,m-1}\) is the average market capitalization of company i in the previous month m−1, and hence \(w_{i,m}\) is the weight by market capitalization of each individual stock. This weight is stable for an individual company within a particular month and changes across months. The sum of individual companies' weighted returns on day t gives the value-weighted return for that day. We apply this method to every day in our time range.
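The value-weighted aggregation described above can be sketched in a few lines of Python (a hypothetical `value_weighted_return` helper illustrating the method, not code from the paper):

```python
def value_weighted_return(day_returns, prev_month_caps):
    """One day's value-weighted index return.

    day_returns:     {ticker: raw 1-day return}
    prev_month_caps: {ticker: average market capitalization in the
                      previous month}
    Weights are previous-month caps normalized to sum to one, so the
    index return is the cap-weighted sum of individual returns.
    """
    total_cap = sum(prev_month_caps[t] for t in day_returns)
    return sum(day_returns[t] * prev_month_caps[t] / total_cap
               for t in day_returns)
```

Because the weights are fixed within a month, the same cap dictionary is reused for every trading day of that month.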
Concerning data on monetary policy changes, or target-rate announcements (we use the two expressions interchangeably in this work, although this is not true in general, because monetary policy announcements include more than announcements of a new target interest rate): during 1997-2008 there were a total of 101 FOMC meetings, of which 49 resulted in a change in the Federal funds rate and 52 in no action. The first of these changes occurred on March 25, 1997, and the last one on December 16, 2008.
For the later analysis we excluded the 50-basis-point interest rate cut of September 17, 2001 from the sample, because it occurred after the terrorist attacks of September 11, 2001; in doing so we followed (Bernanke & Kuttner, 2005). Hence, the resulting sample consists of 100 FOMC meetings, including 48 rate changes.
As noted, we also used financial information for the individual companies on our S&P 500 list. The financial data included standard balance sheet and income statement entries extracted from the COMPUSTAT GLOBAL database. Based on this information we calculated 4 financial ratios: debt-to-capital, interest coverage, current ratio (current assets divided by current liabilities), and payables turnover.
2. Analysis of stock market behavior
Before we proceed to the analysis of the credit channel of monetary policy transmission, we need to compare our data on stock returns with that used in the study of (Ehrmann & Fratzcher, 2004). They used the same data as (Bernanke & Kuttner, 2005), provided by CRSP, which represents the 500 largest public US companies; the periods of both studies are the same. We have to compare the two sources of data indirectly in order to reduce the “measurement error” factor. It is not possible to eliminate the measurement error entirely due to the peculiarities of our data discussed above; at most we can hope that the S&P 500* broad market index of value-weighted stock returns sufficiently reproduces the true CRSP-based market return. To investigate the latter point, we compare regression results of the general model in (Bernanke & Kuttner, 2005). However, we cannot directly compare the two datasets because their time frames are different.
That is why we obtained CRSP data aggregated into 10 size-sorted portfolios by K.R. French. Using cutoff levels of market capitalization (published in the same resource) we created an aggregate CRSP value-weighted market index, and we call this sample of returns “CRSP 500*”.
Table 1. Descriptive statistics

                                    Bernanke (2005)        CRSP 500* vwr    S&P 500* vwr     Corr. coefficient
                                    CRSP vwr,              March 1997 -     March 1997 -     between CRSP 500*
                                    May 1989 - Dec 2002    December 2008    December 2008    and S&P 500* vwr
FOMC meetings (event days)          131                    100              100              -
Rate changes (action days)          54 (May 89 - Jan 94 /  48 (Mar 97 -     48 (Mar 97 -     -
                                    Feb 94 - Dec 02)       Dec 08)          Dec 08)
St.dev. of eq. ret. on event days   0.80 / 1.26            1.413            1.565            95.72%
St.dev. of eq. ret. on nonevent d.  0.71 / 1.11            1.356            1.419            97.57%
St.dev. of eq. ret. on action days  -                      1.506            1.710            95.67%
The table reports selected descriptive statistics for Federal funds rate changes over two overlapping periods and for value-weighted equity returns: the CRSP value-weighted returns as in Bernanke (2005), the S&P 500*, and the CRSP 500*, as indicated in the column headings. All statistics exclude the September 17, 2001 observation. Numbers are in percentage points. St.dev. stands for standard deviation, eq. ret. for equity returns, vwr for value-weighted returns, corr. for correlation.
First of all, we note that the estimates differ between the two samples, the CRSP 500* and the S&P 500*. The difference in standard deviation arises foremost because the number of companies in the latter index is significantly smaller than in the CRSP 500* in earlier periods. For the largest coinciding sample, non-event days, standard deviations are equal to 1 decimal point (1.4%), while the actual difference is 5 bp. As the samples get smaller, including only FOMC meetings or only action days, the difference in standard deviations widens to 16 and 20 bp, respectively. In all instances, our sample tends to overestimate the volatility of the whole market.
Correlation analysis of returns over similar samples reveals that on event and action days the estimates of market returns deviate further, probably due to the smaller number of observations.
The main qualitative finding is that equity markets have become more volatile over time. The standard deviation of equity returns on all non-event days was at least 25 and 65 bp higher for the period 1997-2008 than for the periods 1994-2002 and 1989-1994, respectively. On event days (every FOMC meeting, excluding Sep 17, 2001), the standard deviation of equity returns is higher by at least 15 and 41 bp, respectively. We also note the much higher volatility of the stock market on action days during the later period.
Columns (a), (b) and (c) of Table 2 show estimates of the second regression equation in (Bernanke & Kuttner, 2005):

\[ r_t = \alpha + \beta^{e}\, \Delta i_t^{e} + \beta^{u}\, \Delta i_t^{u} + \varepsilon_t, \]

where \(r_t\) is the broad market return, \(\Delta i_t^{e}\) is the expected portion and \(\Delta i_t^{u}\) the surprise portion of the actual change in the target rate.
Table 2. The Response of Broad Equity Market to Announcements of the Federal Funds Interest Rate.

Regressor          Bernanke (2005) (a)   CRSP 500* (b)   S&P 500* (c)
Intercept          0.12 (1.35)           0.31 (0.16)     0.41 (0.18)
Expected change    1.04 (2.17)           0.83 (1.44)     0.37 (1.67)
Unexpected change  -4.68 (3.03)          -2.84 (2.97)    -5.19 (3.68)
R^2                0.171                 0.085           0.137
Prob > F           0.0000                0.2045          0.1076
The table reports the results from regressions of the 1-day value-weighted index returns on the expected and unanticipated components of the actual Federal funds rate change (columns (a), (b) and (c)). In the sample denoted “Bernanke (2005)” there are 131 observations; in both of our samples there are 100 observations, of which 48 are rate changes and 52 are meetings that resulted in no rate change. All values are in percentage terms. Parentheses contain t-statistics calculated using heteroscedasticity-consistent estimates of standard errors.
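The specification behind Table 2 can be reproduced with ordinary least squares and White (HC0) heteroscedasticity-consistent standard errors; a minimal NumPy sketch (the function name and data layout are our own, not from the paper):

```python
import numpy as np

def event_study_ols(returns, expected, surprise):
    """OLS of 1-day index returns on expected and surprise components of
    the target-rate change, r_t = a + b_e*di_e + b_u*di_u + e_t, with
    White (HC0) heteroscedasticity-consistent standard errors."""
    y = np.asarray(returns, dtype=float)
    X = np.column_stack([np.ones(len(y)), expected, surprise])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    XtX_inv = np.linalg.inv(X.T @ X)
    # HC0 sandwich estimator: (X'X)^-1 X' diag(e^2) X (X'X)^-1
    meat = X.T @ (X * (resid ** 2)[:, None])
    cov = XtX_inv @ meat @ XtX_inv
    se = np.sqrt(np.diag(cov))
    return beta, se
```

Dividing each coefficient by its standard error gives the t-statistics reported in parentheses in the tables.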
The R^2 and explanatory power of the basic equation have diminished relative to the (Bernanke & Kuttner, 2005) estimate, presumably because of chaotic market reaction during the two crises and the smaller number of observations. However, the estimate of the market reaction to an unexpected change in the target interest rate increased in magnitude (5.19% vs. 4.68%) but was insignificant; the larger estimate may signal a more stressful situation in our sample. The expected change and the intercept remained highly insignificant.
Interestingly, we obtained estimates closer to those in (Bernanke & Kuttner, 2005) using the S&P 500* index return as a gauge of the broad equity market than using the CRSP 500*. Thus, it may be that the CRSP 500* contains some measurement bias itself, which makes it impractical to compare our sample with it alone.
Returning to the regressions, we might want to insert slope dummy variables to control for the two recessions (namely, dummies for the periods Mar 2000 - Oct 2002 and Oct 2007 - Dec 2008). The fit of the model generally improves ((Bernanke & Kuttner, 2005) also controlled for two periods, 1989-1994 and 1994-2001, based on when the FRS started to announce its target rate changes to the public):
Table 3. The Response of Broad Equity Market to Changes of the Federal Funds Interest Rate.

                      Full sample   Sample of 48 FOMC actions
Regressor             (a)           (b)           (c)           (d)           (e)           (f)
Intercept             0.41 (0.18)   0.37 (0.12)   0.44 (0.12)   0.41 (0.20)   0.56 (0.22)   0.52 (0.24)
Expected change       0.37 (1.67)   0.12 (1.48)   0.10 (1.51)   0.11 (1.39)   0.02 (1.29)   0.02 (1.31)
Unexpected change     -5.19 (3.68)  -5.04 (3.39)  -5.89 (3.21)  -2.07 (4.30)  -7.60 (2.09)  -5.96 (3.19)
Unexp x 2 Crises      -             -             1.26 (5.66)   -             -             -
Unexp x 00-02 Crisis  -             -             -             6.97 (4.85)   -             2.82 (4.49)
Unexp x 07-08 Crisis  -             -             -             -             10.34 (7.62)  8.57 (8.46)
R^2                   0.137         0.160         0.162         0.234         0.286         0.294
Prob > F              0.1076        0.098         0.141         0.025         0.002         0.017
The table reports the results from regressions of the 1-day S&P 500* value-weighted returns on expected and surprise components of changes in the Federal funds rate (column (b)). In column (c) we introduce a dummy variable for both crisis periods (set to 1 if the unexpected change falls in either the 2000-02 or the 2007-08 period), in columns (d) and (e) a separate dummy for each crisis period, and in column (f) a combination of these dummies. The sample contains 48 observations (FOMC meetings at which the target rate was changed) for the period Mar 1997 - Dec 2008. For reference, the full-sample results from column (c) of Table 2 are inserted in column (a). All values are in percentage terms. Parentheses contain t-statistics calculated using heteroscedasticity-consistent estimates of standard errors.
We see that controlling for the combined crisis periods does not improve the equation; however, imposing separate controls for the different crises works better. The highest and most significant coefficient estimate of unexpected rate changes, 7.60%, is obtained when the 2007-2008 crisis dummy is included alone. In this specification the F-statistic has the lowest p-value among all 6 equations, and the R^2 is very high (28.6%). Interestingly, and somewhat controversially, the coefficient estimate on the crisis interaction with unexpected changes is positive and unusually high (10.34%), though insignificant. The estimate means that during an economic recession the stock market co-moves with the change in the target interest rate, which historically means that the stock market declines as the target interest rate is decreased. This may result from market dissatisfaction with the Federal Reserve System's move and the expectation of stronger action. Controversially, the futures decomposition (see the table in the Appendix) shows that negative unexpected changes were larger in absolute value than positive unexpected changes, and were also more frequent; hence, the market tended to underestimate the reaction of the FRS. It might also be the case that orthogonality broke down during the crisis; orthogonality means that the FRS does not cut rates following a stock market decrease, and (Bernanke & Kuttner, 2005) said there was no clear evidence against it. Another possible explanation is that during a long stock market and economic recession investors look for more structural changes from government authorities, and the influence of a single instrument, money supply, diminishes for a time.
Notwithstanding the true reason for this stock market behavior, we need to pay special attention to the periods of both economic downturns when analyzing factor portfolios.
To conclude about the validity of the S&P 500* sample, we have to bear in mind that it might be subject to some measurement error, expressed in the fact that it overestimates the broad market return. We do not know whether it overestimates particular companies, which is the worst case, or overestimates every company's return in the sample equally. If the latter is true, then our conclusions about the effects of financial ratios are likely to hold qualitatively. We assume that the latter is true and proceed to further analysis; after all, any sample may be subject to some degree of measurement error that cannot be explicitly identified.
Past regression-based analysis of stock reactions to changes in monetary policy.
Analysis of the credit channel of monetary policy transmission is usually done with multiple regression. The equation usually incorporates the unexpected portion of the actual change in the key interest rate because, as (Bernanke & Kuttner, 2005) showed, inclusion of the unexpected rate change significantly improves the fit of the model. In the framework of the New Keynesian model, the theory of rational expectations implies the policy ineffectiveness proposition (PIP), which states that the anticipated portion of a monetary action does not matter for the real economy, while unexpected monetary policy has effects on real variables in the short run (Aksoy, 2015). As stocks are claims on cash flows generated by real assets, such as factories and equipment, they can be thought of as representatives of real economic variables, which, according to the theory, must react in the short run to unexpected monetary actions.
A variety of models has been proposed to account for expectations of the raw Federal funds rate change prior to its announcement. However, (Krueger & Kuttner, 1996), and later (Gurkaynak, 2005), showed that the use of Federal funds futures rates provides efficient and precise estimates of the surprise components of raw target changes. This idea was thoroughly discussed in (Kuttner, 2000) and applied to testing the relationship between the Federal funds rate and bond rates. In that paper, K. Kuttner listed the advantages of his method:
First, there is no issue of model selection; second, the vintage of the data used to produce the forecast is not an issue; and third, there are no generated regressor problems.
The underlying idea of this method is that the Federal funds futures price on the day prior to the FOMC's announcement “embodies near-term expectations of the Federal funds rate” (essentially, the 30-day futures rate is an average of the effective Federal funds rate over the contract month). Consequently, the change in the settle price of the futures contract at the end of the announcement day (t_a) reflects the surprise portion of the actual change in the interest rate. However, as thoroughly explained in (Kuttner, 2000), the difference in settle futures prices must be scaled up by a fraction that reflects the number of days in the month affected by the change in the interest rate:

\[ \Delta i^{u} = \frac{T_m}{T_m - t_a}\left(f^{0}_{m,t_a} - f^{0}_{m,t_a-1}\right), \]

where \(\Delta i^{u}\) is the surprise component of the actual change in the Federal funds rate, \(f^{0}_{m,t_a}\) and \(f^{0}_{m,t_a-1}\) are the current-month futures rates on the day of the announcement and on the previous day (30-day futures on the Federal funds rate are traded at the Chicago Board of Trade (CBOT) and are quoted as 100 minus the contract's price), and \(T_m\) is the number of days in the month. The expected component, therefore, is defined as the difference between the actual change \(\Delta i\) and the surprise component:

\[ \Delta i^{e} = \Delta i - \Delta i^{u}. \]
This method then provides “a nearly pure measure” of the one-day surprise change in the Federal funds rate under the assumption of no further changes within the month (see (Kuttner, 2000)). This assumption “is not entirely justified”: during 1989-2008 there were 6 cases when two changes occurred within one month: July 1989, December 1990, December 1991, January 2001, August 2007, and October 2008. Another potential issue is end-of-month noise in the effective funds rate, which is minimized by using the unscaled change in the 1-month futures rate when the target change occurs within the last three days of the month; similarly, the unscaled change in the 1-month futures is used when the Federal funds change occurs on the first day of the month (see Bernanke & Kuttner (2005) and Kuttner (2001) for further details).
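The scaling rule can be sketched as a small Python helper (hypothetical code for illustration; the end-of-month and first-day special cases, which switch to the unscaled next-month contract, are left to the caller):

```python
def kuttner_surprise(f_today, f_prev, day_of_month, days_in_month,
                     actual_change):
    """Decompose an actual target-rate change into surprise and expected
    components from current-month fed funds futures rates (Kuttner, 2000).

    The one-day change in the futures rate is scaled by T_m / (T_m - t_a)
    because the futures contract settles on the monthly *average* funds
    rate: only the remaining (T_m - t_a) days of the month carry the new
    target rate.
    """
    scale = days_in_month / (days_in_month - day_of_month)
    surprise = scale * (f_today - f_prev)
    expected = actual_change - surprise
    return surprise, expected
```

For example, a 25 bp hike announced mid-month that moves the futures rate by 10 bp (scaled by 2) implies a 20 bp surprise and a 5 bp expected component.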
(Bernanke & Kuttner, 2005) analyzed the broad market reaction to Fed funds rate changes. They applied two models:

\[ r_t = \alpha + \beta\, \Delta i_t + \varepsilon_t, \]
\[ r_t = \alpha + \beta^{e}\, \Delta i_t^{e} + \beta^{u}\, \Delta i_t^{u} + \varepsilon_t, \]

and showed that the second equation is better because, instead of using the actual change in the target rate as a single explanatory variable, it embeds the decomposition of the raw change into expected and surprise components. The second equation therefore accounts for unfulfilled expectations about the new target rate.
Based on this conclusion, (Ehrmann & Fratzcher, 2004) applied an aggregate three-factor regression model to estimate firm-specific effects on the monetary policy response:

\[ r_{i,t} = \alpha + \beta\, s_t + \gamma\, \left(s_t \times x^{z}_{i,t}\right) + \varepsilon_{i,t}, \]

where \(s_t\) is the unexpected portion of a monetary policy change and \(x_{i,t}\) is a specific characteristic of a firm, either time-varying or fixed (industry-specific). The index z of the x-variable also indicates whether a firm's characteristic falls in the low or the high category, in order to compare effects. Surprise rate changes were estimated using Reuters polls of expectations, backed by similar conclusions from Kuttner's Fed funds futures method. The authors aggregated data on S&P 500 companies across 9 years (1994-2003, 79 FOMC meetings), with some stocks not observed at some meetings (though the authors did not provide the numbers). They estimated the model using OLS with panel-corrected standard errors, which corrects for heteroscedasticity and allows for correlation of residuals (correlation across stocks, e.g. of a similar industry, on a particular day).
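An interaction specification of this kind can be illustrated with a pooled OLS sketch (a simplification: plain least squares only, without the panel-corrected standard errors used in the original study; all names are our own):

```python
import numpy as np

def firm_effect_ols(returns, surprise, characteristic):
    """Pooled OLS sketch of r_it = a + b*s_t + c*(s_t * x_it) + e_it.

    The interaction coefficient c measures how a firm characteristic x
    (e.g. debt-to-capital) amplifies or dampens the stock's response to
    the monetary policy surprise s_t.
    """
    y = np.asarray(returns, dtype=float)
    s = np.asarray(surprise, dtype=float)
    x = np.asarray(characteristic, dtype=float)
    X = np.column_stack([np.ones(len(y)), s, s * x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [a, b, c]
```

A significantly negative c for, say, a low-rating dummy would indicate that constrained firms fall by more when the target rate surprises upward.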
Among other factors that might explain the heterogeneous reaction of stocks to unexpected changes in the target interest rate, (Ehrmann & Fratzcher, 2004) explored the credit channel of monetary policy transmission. We summarize their findings in the following table:
Table 4

   Significant factor              Proxy                        Conclusion
1. Firm's size                    Number of employees;          Small firms react more strongly to monetary
                                  market capitalization         shocks than medium-sized or large firms.
2. Size of cash flows             Cash flow / net income        Firms with low cash flows react “significantly”
                                                                more strongly to monetary shocks.
3. Financial (credit) constraint  Moody's investment rating;    Firms with a good rating are more “immune” to
                                  Moody's bank loan rating      monetary policy shocks than firms with “poor”
                                                                ratings.
4. Financial (credit) constraint  Debt-to-capital ratio         “Non-linear effect”: firms with either very low
                                                                or very high ratios react more strongly to
                                                                monetary shocks.
5. Future earnings                Trailing P/E                  Firms with a higher P/E are affected more
                                                                strongly by monetary shocks.
They found that firms with a limited ability to finance their operations are affected more severely than unconstrained firms. That is, small firms, firms with low cash flows, firms with a poor investment or bank loan rating, and firms with a high P/E are affected more significantly than their counterparts. These results were in line with the authors' initial hypotheses (Ehrmann & Fratzcher, 2004). Controversially, firms with either a low or a high debt-to-capital ratio were found to be affected more strongly by surprise changes in the target interest rate. The authors' explanation was that firms with a low debt-to-capital ratio may already be constrained in bank lending and limited in their ability to increase financing through debt issuance.
The existence of low-debt firms with high cash flows and good bank ratings (as of 2015 these are, for example, IBM, Apple, and Wal-Mart) gives grounds to question this conclusion. Therefore, among the other ratios we propose, we will explore this one in our work.
3. Our two-step approach: brief definition with major advantages over earlier methods
In contrast to the common approach of estimating firm-specific effects with multiple regression, we propose a two-step approach, which consists of:
1. Clustering;
2. A test of the difference in mean portfolio returns.
Clustering is aimed at finding cutoff levels in factors, which are then used to form portfolios of stocks whose mean returns are tested for a statistically significant difference. The null hypothesis is that the mean returns of the portfolios are not statistically different, which implies that the factor does not explain the heterogeneous reaction of stocks to surprise target-rate changes. If we have sufficient evidence to reject the null hypothesis, then in the context of this work we conclude that the factor contributes to a credit channel of monetary policy transmission in the equity market.
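The two steps can be sketched in plain Python (a hypothetical one-dimensional k-medians and a paired t-statistic for illustration; in the actual analysis the cutoff between the cluster medians is used to split stocks into the two portfolios):

```python
import statistics
from math import sqrt

def kmedians_1d(values, k=2, iters=100):
    """Step 1: one-dimensional k-medians on a list of factor values.
    Returns the sorted cluster medians; the midpoint between adjacent
    medians can serve as the portfolio cutoff level."""
    # Spread initial centers across the sorted sample.
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[nearest].append(v)
        new = [statistics.median(c) if c else centers[i]
               for i, c in enumerate(clusters)]
        if new == centers:       # converged
            break
        centers = new
    return sorted(centers)

def paired_t(low_returns, high_returns):
    """Step 2: paired t-statistic for the daily difference in mean
    returns of the 'low' and 'high' factor portfolios."""
    diffs = [a - b for a, b in zip(low_returns, high_returns)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / sqrt(n))
```

The samples are paired because the two portfolio returns are observed on the same set of days, so the test is run on the day-by-day differences.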
We believe that our approach has advantages over the commonly used multiple regression method:
Table 5

1. No estimate of a surprise rate change. The presence of such a variable in a multiple regression equation requires aggregating specific data, which might narrow the research field to countries for which this kind of data is available. The data can consist either of experts' expectations drawn from surveys or of short-term futures on the implied target rate. A researcher can avoid this data by using indirect methods of extracting the unexpected rate change, such as identified vector autoregressive models (VAR) (Bernanke & Kuttner, 2005); however, futures “more cleanly isolates the unanticipated element of policy actions”. The target rate change is not involved in the clustering algorithm, which concentrates solely on firm characteristics.

2. No omitted variable bias. A multiple regression such as that in (Ehrmann & Fratzcher, 2004), which investigates every significant factor at once, may suffer from omitted variable bias, which may lead to inconsistent standard errors of estimates.
There are two main disadvantages of the approach:
1. There is no direct method for testing the results of k-medians clustering; therefore, the reliability of the estimates of the true cutoff level is questionable.
2. It is time-consuming: implementing the approach is somewhat more difficult than multiple regression.
Truly, before we can rely on the cutoff estimates, more sophisticated research on applicable clustering algorithms should be done. We experimented only with k-means and k-medians clustering, and did not change their standard similarity measures (distances between objects).
However, we also cannot say that our estimates are “bad”, meaning far away from the true gauge levels. They may be very close; we just do not have a direct method of testing this. With regard to this discussion, the weight of the first disadvantage is reduced. Notwithstanding it, these estimates can be a useful starting point for differentiating companies based on a specified financial ratio. After all, according to (Everitt, 1993), “Clustering methods are intended largely for generating rather than testing hypotheses”.
Concerning the second issue, the extra time required is somewhat compensated by fewer data inputs.
This work introduces the new approach and, hence, does not explore its wide applicability. Apart from the disadvantages of the approach, we list the most important limitations of our work in the table:
Table 6. Limitations of the work

1. One-factor analysis, with one cutoff level dividing companies into only “high” and “low” portfolio categories. The method implies that several factors may be tested at once: clustering can be done in d-dimensional space, where d−1 is the number of factors combined, and portfolios can easily be formed by several factors, for a total of (d−1)·(k−1) cutoffs, where k is the number of clusters and k−1 is the number of cutoff levels.

2. The test for the difference in portfolios' mean returns is limited to a simple t-test for paired samples with unknown variance. This limitation follows directly from the first one. To test the difference in several mean returns at once, a more sophisticated statistical method, such as ANOVA for paired data, can easily be applied.

3. Data is restricted to the US stock market. Our approach simplifies the requirements for input data, hence it may be extended to analyze a wide range of countries.

4. Clustering method. We used a k-medians clustering algorithm, which is an exclusive, intrinsic, partitional way of classification. This method worked better than k-means only for the input data in this work, which by no means exhausts the testing possibilities: k-means may work better for other factors, and there might be other clustering algorithms that give more efficient estimates of the true cutoff portfolio levels.
To conclude, we believe that if the method is advanced further, it will find more applications in the literature of estimating factor effects. For now, we start with basic steps and present results for a small number of companies.
3C. K-medians cluster analysis and portfolios' breakpoints.
First of all, we eliminate all missing information from our initial datasets to form uniform samples for each day. The resulting samples contain data on 4 financial ratios and value-weighted returns for a number of companies from the S&P 500 index; we define these as samples “before outliers”. The actual number of companies in each sample is reported in the Appendix. It depends, first, on the availability of data in the COMPUSTAT GLOBAL database for the companies on the S&P 500 list as of October 2015. Undoubtedly, there are fewer companies in earlier periods than in later ones, because new companies emerged during the 2000s and some older firms, once in the S&P 500, went bankrupt; they do not appear in our dataset. Second, a small number of companies on the S&P 500 list did not have stock price information for some past periods. They may have been private before going public, or may have been merged and taken private after being public; therefore, they could have either changed ticker or been delisted from the stock exchange. These are the only two sources of missing information.
After working through the missing data, we started the first round of k-medians cluster analysis. K-medians is a variation of k-means clustering, a simple and popular method of partitional classification. To understand the basic principles of k-means clustering, it helps to see its place among the full set of methods for classifying data.
“Clustering is a special kind of classification” (Jain & Dubes, 1988). K-means clustering belongs to the exclusive, intrinsic, partitional type of classification in the textbook “Algorithms for Clustering Data” by Jain & Dubes (1988), following the tree of classification problems proposed by Lance & Williams (1967).
Exclusive classification assigns an object to exactly one cluster, whereas non-exclusive, or overlapping, classification allows one object to be labeled with more than one cluster. An example of the first is the division of students on the basis of test results; an example of the second is the classification of Grand Slam winners in tennis: some players have won more than one of the different Grand Slam tournaments.
Intrinsic classification does not use category labels assigned a priori by a researcher to an object. The task is therefore to find a property shared by some objects that at the same time differentiates them from the other objects in the initial sample of observations. By contrast, extrinsic classification relies on the choice of dissimilarity matrix made by an external observer, or “teacher”. In the computer science literature, intrinsic classification is called “unsupervised learning”, and “is the essence of cluster analysis” (Jain & Dubes, 1988).
“Exclusive, intrinsic classifications are subdivided into hierarchical and partitional classifications by the type of structure imposed on the data”. Hierarchical techniques are popular in the biological, social, and behavioral sciences because of the need to systematize data into consecutive groups, or levels. Hierarchical methods are often applied in linguistics, when a researcher constructs families of languages. Another prominent example, from the biological sciences, is the assignment of an object to consecutive levels such as family, genus, species, and class.
Partitional classification refers to clustering whose result is a set of clusters, or groups, of similar objects, where the clusters are distinct from one another. As described in the textbook (Jain & Dubes, 1988), “given n patterns in a d-dimensional metric space”, this method divides the patterns into K clusters such that “the patterns in a cluster are more similar to each other than to patterns in different clusters”. A clustering criterion, either global or local, must then be defined. A global criterion utilizes a cluster “prototype”, or unique characteristic, and assigns objects to clusters according to similar characteristics. A local criterion divides objects into clusters following local similarities; for example, clusters can be formed by identifying high-density regions. An example of a clearly defined criterion is the square-error: the squared distance between objects in a d-dimensional vector space (d is the number of factors in the initial dataset, for example a value-weighted stock price difference and a debt-to-equity ratio), which must be minimized for within-cluster objects.
There are hundreds of clustering criteria and, hence, partitional clustering algorithms, because there is no unique definition of a cluster: clusters vary in size and form. The result of partitional clustering therefore depends on the researcher's choice of a specific algorithm. Furthermore, there is no straightforward procedure for testing alternative clusterings, analogous to, say, the F-test of improvement in fit for multiple regression.
We chose one of the simplest and best-known methods: k-medians clustering. K-medians is a counterpart of the k-means clustering described by MacQueen (1967); like k-means, it is an exclusive, intrinsic, partitional way of clustering the data. The only difference is that k-medians defines cluster centers as medians, not means. The task is therefore to form compact clusters around k centers so as to minimize the Manhattan (absolute-value) distance between observations. In our work these observations are points in a two-dimensional Euclidean space (because we investigate the influence of one factor at a time). Minimizing the absolute-value distance between points is distinct from minimizing the squared-error function, which is the objective of k-means clustering. Some argue that the k-medians algorithm is more reliable for discrete data sets, while (Jain & Dubes, 1988) suggest that a researcher try several clustering algorithms.
We applied both the k-means and the k-medians clustering methods and chose the latter because it better suited both of our objectives. The first was to form approximately equal groups of companies, that is, to divide a sample into two clusters containing approximately equal numbers of data points. The second was to form non-overlapping groups of factor values, meaning that the maximum value of a factor within one cluster is less than the minimum value of that factor in the other cluster. On the raw data (not cleaned of outliers), k-medians clustering gave better results than k-means.
K-medians is sometimes confused with the k-medoids algorithm, whose cluster centers are observations from the raw data; k-medians clustering, by contrast, may produce centroids that are not part of the initial dataset. We do not, however, focus on the cluster centers. We need to find the maximum factor value within the “lower” cluster and the minimum factor value within the “higher” cluster in order to set a cutoff level for a portfolio. That is why the crucial requirement we place on the clustering algorithm is that clusters cover non-overlapping ranges of factor values.
The procedure of k-medians clustering is the following (the description of the algorithm is taken from the textbook by Jain & Dubes, 1988):
1. Select an initial partition with K clusters.
Repeat steps 2 through 5 until the cluster membership stabilizes.
2. Generate a new partition by assigning each pattern to its closest cluster center.
3. Compute new cluster centers as centroids of the clusters.
4. Repeat steps 2 and 3 until an optimum value of the criterion function is found.
5. Adjust the number of clusters by merging and splitting existing clusters or by removing small, or outlier, clusters.
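The loop above can be sketched in a few lines. The snippet below is a minimal one-dimensional illustration (two clusters, Manhattan distance), not the Stata routine actually used in this work; the factor values are made up:

```python
# Minimal 1-D k-medians sketch: two clusters, Manhattan (L1) distance.
# Illustrative only; the study ran clustering in Stata on per-day samples
# of S&P 500 financial ratios.

def kmedians_1d(values, k=2, max_iter=10000):
    """Partition `values` into k clusters around median centers."""
    values = sorted(values)
    # Step 1: initial partition; spread centers over the data range.
    step = len(values) // k
    centers = [values[i * step + step // 2] for i in range(k)]
    for _ in range(max_iter):
        # Step 2: assign each observation to its closest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[j].append(v)
        # Step 3: recompute centers as cluster medians (the median
        # minimizes the within-cluster sum of absolute deviations).
        new_centers = []
        for c in clusters:
            m = len(c)
            med = c[m // 2] if m % 2 else (c[m // 2 - 1] + c[m // 2]) / 2
            new_centers.append(med)
        if new_centers == centers:   # Step 4: membership has stabilized.
            break
        centers = new_centers
    return clusters, centers

# Hypothetical debt-to-capital values (%): a "low" and a "high" group.
sample = [12.1, 15.4, 18.0, 20.2, 44.7, 48.3, 51.9, 55.0]
clusters, centers = kmedians_1d(sample)
print(centers)   # one median per cluster
```

With two well-separated groups, as here, the algorithm converges after a couple of iterations to two equal-sized clusters, which is exactly the behavior we need for forming portfolios.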
There are several problems with k-medians clustering. First of all, the number K is ambiguous and depends on the researcher's choice. For our research we set K = 2; there is therefore no guarantee that an optimal number of clusters was selected. Consequently, the literature suggests trying different numbers of clusters. We chose among 2, 3, 4, 5, and 6 clusters, and the best results were achieved with only 2. We infer that a larger number of clusters forced the algorithm to sort outliers into small groups, which produced generally unequal groups for every factor tested. Only when K was 2 were approximately equal groups of companies obtained in the raw data (based on debt-to-equity or current-ratio dissimilarity measures). We need approximately equal groups so that, in the next step of the research, the stock portfolios contain approximately equal numbers of companies. Furthermore, (Jain & Dubes, 1988) note that some clustering algorithms identify “small” clusters as outliers.
The second problem is that a statistical package uses heuristic algorithms to sidestep the combinatorially complex iterative partitioning of a large number of objects. One consequence is convergence to a local optimum: the number of iterations is constrained (in Stata, we used the default of 10,000 iterations), and the number of clusters k chosen by the researcher may be less than optimal. It is possible that using 7 or more clusters would generate more equal groups for all 3 factors tested (Debt-to-Capital, Tobin's Q, and Interest Coverage) and contribute to more precise estimates of the cutoff levels. Because a k-medians clustering may well converge to a local rather than the global optimum, we may have assigned companies to “wrong” portfolios based on “wrong” cutoff levels. However, there is no way to test this other than to conduct several studies or, perhaps, a Monte Carlo simulation; this is outside the scope of the present work.
Furthermore, as a result of these simplified computations, using larger datasets reduces the precision of the estimated centroids, and hence the cutoff estimate may lie far from its true value.
Consequently, k-medians clustering cannot be applied to an arbitrarily combined set of raw data; in particular, we cannot merge the data on stock returns and company-specific factors from all 48 days into one large dataset and run a single k-medians clustering. This is a consequence of the two problems outlined above, and it makes it impossible to pool data from all days into one sample to determine a cutoff level at once.
A solution is to determine cutoff levels for each day and then take a weighted sum of the cutoffs to obtain one common cutoff per factor. Obviously, weighting and summing cutoff levels for every day in the sample is very time-consuming, because one must run the clustering for each variable on each day. For 48 days and 3 factor variables this amounts to running a k-medians clustering 144 times. The problem doubles once each factor variable is also checked for outliers and another set of clusterings is run, for a total of 288 runs. This makes the whole method clumsy and less efficient in terms of the time trade-off.
Hence, we proposed combining data according to two criteria. The first is the absolute size of the target rate change together with its direction; these had to be the same for merged samples. Another important factor that may influence the clustering procedure is the distribution of factor values. We therefore required that the medians of each factor on each day in the new combined sample be approximately equal to each other. Means are not applicable because they are influenced by outliers. Nor can we use a standard deviation, since the factor distributions have skewed tails. The presence of several severe outliers, according to (Hoaglin, et al., 1981), is evidence that the samples were not drawn from normal distributions.
However, the criterion “approximately equal” is ambiguous. We used a 3% density of sample observations around the median value. For example, if there are 101 observations in each of two samples, we compare the 51st observation (the median) of the first sample with the range of values of the 3 central observations of the second sample. If the median falls inside that range, we merge the two samples (given that the first criterion is satisfied), regardless of whether the median is a single observation or the simple average of two adjacent observations.
So, we constructed 8 groups of raw data based on these criteria and cleared the samples of mild and severe outliers. (The criterion for outliers was a simple letter-value plot in Stata, which outputs, among other statistics, the median value and the number of mild and severe outliers for each sample; an example is in the Application. Mild outliers were defined as observations more than 3/2 IQR beyond the quartiles, where the IQR, or interquartile range, is the difference between the 75th and 25th percentiles of the sorted sample; severe outliers lie more than 3 IQR beyond the quartiles.) We then weighted the resulting cutoff levels for each of the three factors according to the formula

C = SUM_i (N_i / N) * C_i,    where N = SUM_i N_i,

which means that we give more weight to the cutoff C_i of a group with a larger number of observations N_i.
In turn, C_i equals the simple average of the minimum observed value in the cluster with the higher median and the maximum observed value in the cluster with the lower median (i.e., in the second cluster, since there are only two clusters).
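The per-group cutoff construction just described (trim IQR outliers, then average the low cluster's maximum with the high cluster's minimum) can be sketched as follows; the factor values are made up:

```python
# Sketch: trim IQR outliers, then take the cutoff as the midpoint between
# the "low" cluster's maximum and the "high" cluster's minimum.
# The trimming thresholds (3/2 IQR mild, 3 IQR severe) follow the
# letter-value-plot convention described in the text.

def quartiles(xs):
    """Linear-interpolation 25th and 75th percentiles."""
    xs = sorted(xs)
    n = len(xs)
    def q(p):
        i = p * (n - 1)
        lo, hi = int(i), min(int(i) + 1, n - 1)
        return xs[lo] + (i - lo) * (xs[hi] - xs[lo])
    return q(0.25), q(0.75)

def trim_outliers(xs, k=1.5):
    """Drop observations more than k * IQR beyond the quartiles."""
    q1, q3 = quartiles(xs)
    iqr = q3 - q1
    return [x for x in xs if q1 - k * iqr <= x <= q3 + k * iqr]

def cutoff(low_cluster, high_cluster):
    """Midpoint between the low cluster's max and the high cluster's min."""
    return (max(low_cluster) + min(high_cluster)) / 2

low = trim_outliers([24.1, 26.5, 28.0, 29.3, 95.0])   # 95.0 is an outlier
high = trim_outliers([33.9, 35.2, 36.8, 38.1])
print(cutoff(low, high))   # (29.3 + 33.9) / 2 = 31.6
```

Passing k=3 to trim_outliers would remove only the severe outliers instead.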
Table 7. Cutoff levels for factor portfolios.

Cutoff levels
            Group   Debt-to-Capital, %   Tobin's Q   Interest Coverage
75          1       33.595               1.372099    16.63
50          2       35.495               2.0715      10.7835
25_1        3       24.175               2.22044     20.966
25_2        4       35.41                2.322599    9.9945
25_3        5       26.66                2.272133    15.628
25_1        6       31.505               2.516775    11.3195
25_2        7       29.21                2.274438    15.291
50          8       35.86                1.991328    10.768

Number of observations
75          1       500                  483         458
50          2       1561                 1451        1382
25_1        3       454                  428         410
25_2        4       794                  753         700
25_3        5       726                  715         651
25_1        6       999                  923         866
25_2        7       3905                 3782        3433
50          8       197                  185         173
total               9136                 8720        8073

Resulting cutoff    31.004               2.111       12.308
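As a check, the resulting Debt-to-Capital cutoff in Table 7 can be reproduced by weighting each group's cutoff by its share of observations:

```python
# Weighted cutoff: each group's cutoff C_i weighted by its observation
# share N_i / N. Figures are the Debt-to-Capital columns of Table 7.

cutoffs = [33.595, 35.495, 24.175, 35.41, 26.66, 31.505, 29.21, 35.86]
n_obs   = [500, 1561, 454, 794, 726, 999, 3905, 197]

total = sum(n_obs)   # 9136 observations, as in the table
weighted = sum(c * n for c, n in zip(cutoffs, n_obs)) / total
print(round(weighted, 3))   # 31.004, the reported resulting cutoff
```

The same computation on the Tobin's Q and Interest Coverage columns yields the other two reported cutoffs.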

So, we differentiate our portfolios into Low and High categories based on whether a company's financials on the given date fall below or above these cutoff levels.
4. Comparison of portfolios' performances
In this section we construct 3x2 factor portfolios based on the reported cutoff levels and calculate their mean returns for each action day.
The null hypothesis is that there is no significant difference between the portfolio means. This would imply that the factors we investigate do not contribute to a credit channel of monetary policy transmission. If there is enough evidence to reject the null hypothesis, we conclude that the specified factors do explain the heterogeneous stock reaction to surprise target rate changes, and that this heterogeneity reflects differences in the financial constraints firms face.
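The test itself is a two-sample comparison of means. A minimal sketch using Welch's t-statistic (which does not assume equal variances), on hypothetical portfolio returns rather than the study's data:

```python
# Two-sample (Welch) t-test for a difference in mean portfolio returns.
# Illustrative data only; the actual samples are the per-action-day
# returns of the High and Low factor portfolios.
import math

def welch_t(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    t = (ma - mb) / math.sqrt(va / na + vb / nb)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

low  = [2.1, 1.8, 2.6, 1.5, 2.9, 2.2]   # hypothetical Low-portfolio returns
high = [0.9, 1.1, 0.4, 1.3, 0.8, 0.6]   # hypothetical High-portfolio returns
t, df = welch_t(low, high)
print(t, df)   # a large |t| rejects the equal-means null
```

Comparing t against the Student-t critical value for df degrees of freedom gives the significance levels quoted later in this section.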
Following the papers of (Ehrmann & Fratzcher, 2004) and (Owen, et al., 2001), we used Tobin's Q and Debt-to-Capital as measures of the financial constraint faced by a firm. We also added a third variable to the analysis, the Interest Coverage ratio. This ratio (the authors call it “interest times cover”) is introduced in (Leiwy & Perks, 2013) as one of the four key measures of a company's financial health.
We calculated these ratios for each of the 480 firms for which the corresponding financial data (from the Balance Sheet and Income Statement) were available in COMPUSTAT, according to the formulas in (Owen, et al., 2001) and (Leiwy & Perks, 2013):

Debt-to-Capital = Total Debt / (Total Debt + Stockholders' Equity),

where Stockholders' Equity includes Common Equity (the key elements of which are common shares, paid-in capital, and retained earnings) and Preferred Share capital.

Tobin's Q = Market Value of Total Assets / Book Value of Total Assets,

where the market value of Total Assets is defined as the book value of Total Assets plus the market value of Common Equity, minus the sum of the book value of Common Equity and deferred Long-term Taxes.

Interest Coverage = Operating Profit (EBIT) / Interest Expense.
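Under these definitions, the three ratios could be computed as below. The argument names are illustrative stand-ins, not actual COMPUSTAT item mnemonics, and the sample firm is hypothetical:

```python
# Sketch of the three ratios from balance-sheet / income-statement items.
# Argument names are illustrative, not COMPUSTAT mnemonics.

def debt_to_capital(total_debt, stockholders_equity):
    # Stockholders' equity = common equity + preferred share capital.
    return 100 * total_debt / (total_debt + stockholders_equity)

def tobins_q(book_assets, market_common_equity, book_common_equity,
             deferred_lt_taxes):
    # Market value of assets = book assets + market common equity
    # minus (book common equity + deferred long-term taxes).
    market_assets = (book_assets + market_common_equity
                     - book_common_equity - deferred_lt_taxes)
    return market_assets / book_assets

def interest_coverage(operating_profit, interest_expense):
    # "Interest times cover": how many times EBIT covers interest expense.
    return operating_profit / interest_expense

# A hypothetical firm (all figures in the same currency units):
print(debt_to_capital(300, 700))      # 30.0 (%)
print(tobins_q(1000, 900, 400, 50))   # 1.45
print(interest_coverage(120, 10))     # 12.0
```

Each function maps directly onto one of the formulas above, which makes the cutoff comparisons in the next tables easy to reproduce.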
The first and the third ratios have a somewhat different meaning for financial organizations, especially banks. Unlike industrial corporations, banks' interest expense is their primary expense, and banks usually operate with a higher debt-to-capital ratio as defined here. Therefore, we also eliminate this entire sector and go through the same two-step procedure to obtain mean portfolio returns for each day, recalculating the value-weighted returns so that the weights of the industrial companies sum to 1 in the reduced sample. In total we eliminated 82 financial companies from the 480 companies in our S&P 500 sample.
The fact that our analysis did not require decomposing actual interest rate changes into expected and unexpected portions does not mean that we simplify the analysis by ignoring the theory of rational expectations. In fact, a correlation analysis of the 3x2 factor portfolios' returns with changes in the target interest rate revealed the same patterns as the regressions in the first part of the work: there is little correlation between equity returns and actual federal funds rate changes, approximately no correlation between portfolio returns and expected rate changes, and a high degree of correlation (up to 55% in absolute value) between returns and surprise rate changes:
Table 8. Correlations of portfolio returns with actual, expected, and unexpected rate changes

Factor   Portfolio   actual   exp    unexp
F1       1           6%       7%     40%
F1       2           5%       11%    48%
F2       1           3%       22%    55%
F2       2           17%      12%    18%
F3       1           3%       14%    49%
F3       2           10%      2%     38%
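The entries in Table 8 are correlation coefficients between each portfolio's action-day returns and the three rate-change series. Assuming plain Pearson correlations, they can be reproduced along these lines (the series below are made up):

```python
# Pearson correlation between a portfolio's action-day returns and a
# rate-change series (actual, expected, or unexpected). Data are made up.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

returns  = [1.2, -0.8, 0.5, -1.5, 0.9]      # hypothetical portfolio returns
surprise = [-10, 5, -5, 15, -10]            # hypothetical surprise changes, bp
print(round(pearson(returns, surprise), 2)) # strongly negative for this data
```

Running this once per portfolio and per rate-change series fills in a table of the same shape as Table 8.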

Why, then, did we claim that we do not need to decompose the rate change into two portions? The answer is that during the 11 years of our sample, from 1997 to 2008, the unexpected rate change equaled zero only 9 times out of 49. Combining K. Kuttner's estimates (Kuttner, 2000) with these data and our estimate of the surprise change on December 16, 2015 (the first federal funds rate change since December 16, 2008; consistent with our previous estimates and equal to 2 bp), we see that over the last 26 years (from 1989 to 2015) there were only 11 out of 85 action days on which the market fully expected the rate change. Therefore, we impose an assumption, which at least holds for the United States, that the market is not able to consistently and fully anticipate a key interest rate change. However, in order to extend our approach to studying the role of financial ratios in explaining heterogeneous equity reactions, it would be necessary to investigate how well market participants in the particular country anticipate rate changes.
Table 9. Mean Portfolio Returns, Full Sample

Factor           Debt-to-Capital   Tobin's Q        Interest Coverage
High Portfolio   8.789 (4.018)     20.773 (9.223)   19.659 (9.158)
Low Portfolio    22.085 (9.501)    10.184 (5.800)   11.249 (4.337)

Table 10. Mean Portfolio Returns, Excluding Financials

Factor           Debt-to-Capital   Interest Coverage
High Portfolio   12.192 (7.772)    48.018 (20.686)
Low Portfolio    33.798 (15.356)   6.596 (3.832)

We have found evidence that the three ratios do influence stock performance on the days when monetary policy changes. The strongest evidence is for the Debt-to-Capital ratio. In both the full and the reduced sample, the group of companies with the ratio below 31.004% and below 50%, respectively (for the reduced sample we obtained different cutoff levels, 50% for Debt-to-Capital and 2.0 for Interest Coverage, in order to make the clusters equal), outperformed the group of more indebted companies. The results are significant at the 3% and 1% significance levels, respectively. In the full sample, companies with an Interest Coverage ratio below 12.308 outperformed their counterparts, though the result is significant only at the 10% level. In the reduced sample, however, firms with an Interest Coverage ratio below 2.0 outperformed the other group, with the result significant at the 2% level. This interesting finding suggests that we may have obtained a suboptimal cutoff level in the full sample, possibly as a consequence of the large number of outliers for this particular ratio (on average, there were far fewer outliers in the samples for Tobin's Q, and fewer still for the Debt-to-Capital ratio). As for the last measure of financial constraint, Tobin's Q, the Low-category portfolio of companies with Q < 2.111 outperformed the High-category portfolio, though this is significant only at the 10% level.