HW Guide Part A: Replicate Fama-French 1993#
Summary
The Fama and French (1993) paper, "Common Risk Factors in the Returns on Stocks and Bonds," revolutionized asset pricing theory by challenging the Capital Asset Pricing Model (CAPM) with a three-factor model that includes market risk, size (market capitalization), and book-to-market value as predictors of stock returns. They demonstrated that the market factor alone does not fully explain stock returns: small stocks and stocks with high book-to-market ratios earn higher returns than market beta alone can explain, suggesting that these factors capture additional risks not accounted for by the CAPM. This seminal work has significantly influenced academic research and investment practice by highlighting the importance of size and value factors in asset pricing, thereby laying the groundwork for more sophisticated models and strategies in finance.
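In regression form, the model says that the excess return on asset $i$ loads on three factors:

$$
R_{i,t} - R_{f,t} = \alpha_i + \beta_i \left( R_{M,t} - R_{f,t} \right) + s_i \, \mathrm{SMB}_t + h_i \, \mathrm{HML}_t + \varepsilon_{i,t}
$$

where $\mathrm{SMB}_t$ (Small minus Big) and $\mathrm{HML}_t$ (High minus Low book-to-market) are the size and value factors whose construction we replicate in this notebook.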
Learning Outcomes
Work with CRSP and Compustat datasets: Gain hands-on experience in using financial data sources that provide stock market prices and firm-level accounting data, which are essential for empirical research in finance.
Leverage the CRSP/Compustat Merged (CCM) database: Understand the purpose and structure of the CCM linking table, and use it to accurately merge CRSP's market data with Compustat's fundamental data.
Construct Fama-French portfolios and factors: Use market capitalization and book-to-market ratios to classify firms into portfolios, calculate value-weighted returns, and compute the size (SMB) and value (HML) factors.
Compare your results with established benchmarks: Validate your manually computed Fama-French factors against the official data from Kenneth French's library and analyze any differences.
Game Plan
The code underlying this notebook follows the methodology described at https://wrds-www.wharton.upenn.edu/pages/wrds-research/applications/risk-factors-and-industry-benchmarks/fama-french-factors/. I will quote from that methodology extensively. The linked page also includes a set of videos that may be helpful for this exercise.
The code is lightly edited from the code provided by WRDS on their website.
from matplotlib import pyplot as plt
import pandas as pd
from scipy import stats
import datetime
import pull_CRSP_Compustat
import calc_Fama_French_1993
from settings import config
DATA_DIR = config("DATA_DIR")
Prep Data#
Step 1. Load Data#
In order to replicate the factors from this paper, we need data from CRSP and Compustat. The CRSP data provides the stock prices while Compustat provides the fundamentals. We also use the CRSP/Compustat Linking Table to merge the two datasets. The merge is not straightforward, but there is a standard procedure, embodied in the linking table provided by CRSP and available on WRDS.
The CRSP dataset contains stock market data, including prices, returns, and shares outstanding, for securities traded on major U.S. exchanges. This dataset is essential for measuring market performance and calculating variables such as market equity.
The Compustat dataset, on the other hand, provides firm-level fundamental accounting data, such as book equity, revenue, and earnings, which are used to measure financial characteristics of companies.
Finally, the CRSP/Compustat Linking Table (CCM) acts as a bridge between these two datasets by mapping CRSP's security identifiers (PERMNO, PERMCO) to Compustat's firm identifiers (GVKEY). This linking table resolves complexities such as multiple securities per firm, corporate actions, and differing coverage periods, ensuring accurate integration of the datasets.
I have written code to pull data from each of these datasets in the pull_CRSP_Compustat module used below.
comp = pull_CRSP_Compustat.load_compustat(data_dir=DATA_DIR)
crsp = pull_CRSP_Compustat.load_CRSP_stock_ciz(data_dir=DATA_DIR)
ccm = pull_CRSP_Compustat.load_CRSP_Comp_Link_Table(data_dir=DATA_DIR)
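For reference, here is a minimal sketch of how the CCM linking table could be pulled directly from WRDS with the `wrds` package. The table and column names below follow standard WRDS naming conventions but are my assumptions; the actual queries used in this notebook live in the pull_CRSP_Compustat module.

```python
import wrds

# Open a WRDS connection (prompts for credentials or uses a stored .pgpass entry).
db = wrds.Connection()

# CRSP/Compustat Merged linking table: maps GVKEY to PERMNO/PERMCO,
# with the link type, primacy flag, and the date range over which the link is valid.
ccm_link = db.raw_sql(
    """
    SELECT gvkey, lpermno AS permno, lpermco AS permco,
           linktype, linkprim, linkdt, linkenddt
    FROM crsp.ccmxpf_linktable
    WHERE linktype IN ('LU', 'LC')   -- researched links only
      AND linkprim IN ('P', 'C')     -- primary links only
    """,
    date_cols=["linkdt", "linkenddt"],
)

db.close()
```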
Step 2. Calculate Book Equity#
"We used Compustat XpressFeed (annual data) as the source of historical accounting data in order to calculate the value of Book Equity. Different from quarterly data, annual data is unrestated (ideal for backtesting). Book Equity is defined as the Compustat book value of stockholders' equity plus balance sheet deferred taxes and investment tax credit (if available) minus book value of preferred stock. We did not modify the original formula that adds deferred taxes and investment tax credit. According to Kenneth French's website (as of May 2018), there had been changes to the treatment of deferred taxes described in FASB 109.
To estimate the book value of preferred stock they use the redemption, liquidation, or par value of preferred stock (in that order). Since Book Equity is missing for almost the whole sample during the 1950s, we constrained our sample to begin in 1960. Additionally, we created a variable that counts the number of annual records in Compustat files."
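Before calling the actual implementation below, here is a rough sketch of the book-equity logic just described, assuming standard Compustat column names (seq for stockholders' equity, txditc for deferred taxes and investment tax credit, and pstkrv/pstkl/pstk for the three preferred-stock measures); the real code is calc_Fama_French_1993.calc_book_equity_and_years_in_compustat.

```python
import pandas as pd

def calc_book_equity_sketch(comp: pd.DataFrame) -> pd.DataFrame:
    """Approximate book equity following the description above (a sketch, not the module's code)."""
    comp = comp.copy()

    # Preferred stock: redemption value, then liquidation value, then par value.
    ps = comp["pstkrv"].fillna(comp["pstkl"]).fillna(comp["pstk"]).fillna(0)

    # Deferred taxes and investment tax credit, when available.
    txditc = comp["txditc"].fillna(0)

    # Book equity = stockholders' equity + deferred taxes/ITC - preferred stock.
    comp["be"] = comp["seq"] + txditc - ps
    comp["be"] = comp["be"].where(comp["be"] > 0)  # keep only positive book equity

    # Count the number of annual records per firm in the Compustat file.
    comp = comp.sort_values(["gvkey", "datadate"])
    comp["count"] = comp.groupby("gvkey").cumcount()
    return comp
```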
comp = calc_Fama_French_1993.calc_book_equity_and_years_in_compustat(comp)
comp.info()
<class 'pandas.core.frame.DataFrame'>
Index: 571815 entries, 0 to 71814
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 gvkey 571815 non-null object
1 datadate 571815 non-null datetime64[ns]
2 year 571815 non-null int32
3 be 419033 non-null float64
4 count 571815 non-null int64
dtypes: datetime64[ns](1), float64(1), int32(1), int64(1), object(1)
memory usage: 24.0+ MB
Step 3. Subset CRSP to Common Stock and Proper Exchanges#
NOTE: I am using the updated CIZ version of the CRSP flat file. This means that I don't have to merge the CRSP event files with the time-series files, and I don't need to apply delisting returns, as they are already applied.
"For the purpose of this procedure we used CRSP monthly data (users can extend this calculation to daily data). The first step in working with CRSP was to merge CRSP 'event' and 'time-series' files. CRSP event files contain historical information on the exchange code (crucial to identify firms listed in NYSE), share codes (to identify common stocks) and delisting returns. CRSP time-series files (as CRSP.MSF) contain information such as prices, returns and shares outstanding. We merged both files using a macro program (named 'crspmerge')."
crsp = calc_Fama_French_1993.subset_CRSP_to_common_stock_and_exchanges(crsp)
crsp.info()
<class 'pandas.core.frame.DataFrame'>
Index: 3451950 entries, 0 to 165348
Data columns (total 16 columns):
# Column Dtype
--- ------ -----
0 permno int64
1 permco int64
2 mthcaldt datetime64[ns]
3 issuertype object
4 securitytype object
5 securitysubtype object
6 sharetype object
7 usincflg object
8 primaryexch object
9 conditionaltype object
10 tradingstatusflg object
11 mthret float64
12 mthretx float64
13 shrout float64
14 mthprc float64
15 jdate datetime64[ns]
dtypes: datetime64[ns](2), float64(4), int64(2), object(8)
memory usage: 447.7+ MB
crsp.head()
permno | permco | mthcaldt | issuertype | securitytype | securitysubtype | sharetype | usincflg | primaryexch | conditionaltype | tradingstatusflg | mthret | mthretx | shrout | mthprc | jdate | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 10000 | 7952 | 1986-01-31 | ACOR | EQTY | COM | NS | Y | Q | RW | A | 0.707317 | 0.707317 | 3680.0 | 4.375000 | 1986-01-31 |
1 | 10000 | 7952 | 1986-02-28 | ACOR | EQTY | COM | NS | Y | Q | RW | A | -0.257143 | -0.257143 | 3680.0 | 3.250000 | 1986-02-28 |
2 | 10000 | 7952 | 1986-03-31 | ACOR | EQTY | COM | NS | Y | Q | RW | A | 0.365385 | 0.365385 | 3680.0 | 4.437500 | 1986-03-31 |
3 | 10000 | 7952 | 1986-04-30 | ACOR | EQTY | COM | NS | Y | Q | RW | A | -0.098592 | -0.098592 | 3793.0 | 4.000000 | 1986-04-30 |
4 | 10000 | 7952 | 1986-05-30 | ACOR | EQTY | COM | NS | Y | Q | RW | A | -0.222656 | -0.222656 | 3793.0 | 3.109375 | 1986-05-31 |
Note that when we do this with the new CIZ format, we also need to apply the following filters:

1. conditionaltype = 'RW': Keeps securities with a "Regular Way" trading status, indicating standard settlement transactions (typically T+2 for equities) without special conditions. RW (Regular Way) ensures the security trades under normal market operations and excludes securities with conditional trading statuses (e.g., halted, suspended, or special settlement terms).

2. tradingstatusflg = 'A': Identifies securities that are actively trading. A (Active) means the security is actively traded on the exchange during the period, in contrast with flags like H (Halted) or S (Suspended), which indicate temporary trading interruptions.
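Putting the exchange, share-type, and trading-status restrictions together, the subsetting step looks roughly like the sketch below. The column names match the CRSP CIZ output shown above; the specific value filters (e.g., primaryexch in N/A/Q for NYSE, AMEX, and NASDAQ) are my reading of the methodology and an approximation of the legacy share-code 10/11 and exchange-code 1/2/3 filters. The actual logic lives in subset_CRSP_to_common_stock_and_exchanges.

```python
import pandas as pd

def subset_crsp_ciz_sketch(crsp: pd.DataFrame) -> pd.DataFrame:
    """Rough CIZ-format equivalent of the common-stock and exchange filters (a sketch)."""
    mask = (
        (crsp["sharetype"] == "NS")                    # new-style share type
        & (crsp["securitytype"] == "EQTY")             # equities only
        & (crsp["securitysubtype"] == "COM")           # common stock
        & (crsp["usincflg"] == "Y")                    # U.S.-incorporated issuers
        & (crsp["issuertype"].isin(["ACOR", "CORP"]))  # corporate issuers
        & (crsp["primaryexch"].isin(["N", "A", "Q"]))  # NYSE, AMEX, NASDAQ
        & (crsp["conditionaltype"] == "RW")            # regular-way trading
        & (crsp["tradingstatusflg"] == "A")            # actively trading
    )
    return crsp[mask]
```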
Step 4. Calculate Market Equity#
NOTE: I am using the updated CIZ version of the CRSP flat file. This means that I don't have to merge the CRSP event files with the time-series files, and I don't need to apply delisting returns, as they are already applied.
"Second, we added delisting returns (to reduce any bias in portfolio returns) and calculated Market Capitalization (ME) for each CRSP security (abs(prc)*shrout). There were cases when the same firm (permco) had two or more securities (permno) on the same date. For the purpose of ME for the firm, we aggregated all ME for a given permco, date. This aggregated ME was assigned to the CRSP permno that has the largest ME. Finally, ME at June and December were flagged since (1) December ME will be used to create Book-to-Market ratio (BEME) and (2) June ME has to be positive in order to be part of the portfolio."
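A minimal sketch of the market-equity aggregation described in the quote, using the CIZ column names from above (mthprc is already a positive month-end price, so abs() is belt-and-braces); the full logic, including the June and December flags, is in calculate_market_equity and use_dec_market_equity, called next.

```python
import pandas as pd

def calculate_market_equity_sketch(crsp: pd.DataFrame) -> pd.DataFrame:
    """Aggregate security-level ME to the firm level, one permno per permco-month (a sketch)."""
    crsp = crsp.copy()
    crsp["me"] = crsp["mthprc"].abs() * crsp["shrout"]  # security-level market equity

    # Total firm-level ME: sum across all securities (permnos) of a firm (permco) each month.
    agg_me = (
        crsp.groupby(["jdate", "permco"], as_index=False)["me"].sum()
        .rename(columns={"me": "me_firm"})
    )

    # Keep only the security with the largest ME within each firm-month...
    max_me = crsp.groupby(["jdate", "permco"], as_index=False)["me"].max()
    crsp = crsp.merge(max_me, on=["jdate", "permco", "me"], how="inner")
    crsp = crsp.drop_duplicates(subset=["jdate", "permco"])

    # ...and assign it the firm-level aggregate ME.
    crsp = crsp.merge(agg_me, on=["jdate", "permco"], how="inner")
    crsp = crsp.drop(columns="me").rename(columns={"me_firm": "me"})
    return crsp
```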
crsp2 = calc_Fama_French_1993.calculate_market_equity(crsp)
crsp3, crsp_jun = calc_Fama_French_1993.use_dec_market_equity(crsp2)
crsp3.tail()
permno | permco | mthcaldt | issuertype | securitytype | securitysubtype | sharetype | usincflg | primaryexch | conditionaltype | ... | ffdate | ffyear | ffmonth | 1+retx | cumretx | L_cumretx | L_me | count | mebase | wt | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3417043 | 93436 | 53453 | 2023-08-31 | CORP | EQTY | COM | NS | Y | Q | RW | ... | 2023-02-28 | 2023 | 2 | 0.965038 | 0.985904 | 1.021622 | 8.488212e+08 | 158 | 830857980.0 | 8.488228e+08 |
3417044 | 93436 | 53453 | 2023-09-29 | CORP | EQTY | COM | NS | Y | Q | RW | ... | 2023-03-31 | 2023 | 3 | 0.969544 | 0.955877 | 0.985904 | 8.191444e+08 | 159 | 830857980.0 | 8.191462e+08 |
3417045 | 93436 | 53453 | 2023-10-31 | CORP | EQTY | COM | NS | Y | Q | RW | ... | 2023-04-30 | 2023 | 4 | 0.802654 | 0.767239 | 0.955877 | 7.954494e+08 | 160 | 830857980.0 | 7.941983e+08 |
3417046 | 93436 | 53453 | 2023-11-30 | CORP | EQTY | COM | NS | Y | Q | RW | ... | 2023-05-31 | 2023 | 5 | 1.195379 | 0.917141 | 0.767239 | 6.384545e+08 | 161 | 830857980.0 | 6.374665e+08 |
3417047 | 93436 | 53453 | 2023-12-29 | CORP | EQTY | COM | NS | Y | Q | RW | ... | 2023-06-30 | 2023 | 6 | 1.034988 | 0.949230 | 0.917141 | 7.631954e+08 | 162 | 830857980.0 | 7.620140e+08 |
5 rows × 27 columns
crsp_jun.tail()
permno | mthcaldt | jdate | sharetype | securitytype | securitysubtype | usincflg | issuertype | primaryexch | conditionaltype | tradingstatusflg | mthret | me | wt | cumretx | mebase | L_me | dec_me | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
272260 | 93436 | 2019-06-28 | 2019-06-30 | NS | EQTY | COM | Y | CORP | Q | RW | A | 0.206848 | 4.002571e+07 | 3.157278e+07 | 0.651583 | 5.847846e+07 | 3.282331e+07 | 5.744195e+07 |
272261 | 93436 | 2020-06-30 | 2020-06-30 | NS | EQTY | COM | Y | CORP | Q | RW | A | 0.293186 | 2.008447e+08 | 1.495638e+08 | 4.832239 | 4.002571e+07 | 1.547848e+08 | 7.574367e+07 |
272262 | 93436 | 2021-06-30 | 2021-06-30 | NS | EQTY | COM | Y | CORP | Q | RW | A | 0.087137 | 6.688268e+08 | 5.814550e+08 | 3.147314 | 2.008447e+08 | 6.022932e+08 | 6.773402e+08 |
272263 | 93436 | 2022-06-30 | 2022-06-30 | NS | EQTY | COM | Y | CORP | Q | RW | A | -0.111888 | 7.010302e+08 | 7.461301e+08 | 0.990760 | 6.688268e+08 | 7.855649e+08 | 1.092191e+09 |
272264 | 93436 | 2023-06-30 | 2023-06-30 | NS | EQTY | COM | Y | CORP | Q | RW | A | 0.283627 | 8.308580e+08 | 6.368738e+08 | 1.166153 | 7.010302e+08 | 6.463570e+08 | 3.897415e+08 |
Step 5. Merge CRSP and Compustat#
What is the CRSP/Compustat Merged (CCM) database?#
To successfully replicate the Fama-French factors, we need to merge the CRSP and Compustat datasets. This step is not straightforward due to the differences in their data structures, coverage, and primary identifiers. CRSP data primarily tracks securities, while Compustat focuses on firm-level accounting data, making it necessary to carefully link the two. Luckily, we have access to the CRSP/Compustat Merged (CCM) database, which serves as an intermediary table designed to simplify the merging process by providing pre-established links between the datasets.
It is important to note that the CCM database is not a direct merging of the CRSP and Compustat datasets. Instead, it is a linking table that matches CRSP identifiers (such as PERMNO and PERMCO) with Compustat's GVKEY identifier. By leveraging CCM, we can handle complexities such as companies with multiple securities, historical changes due to mergers and acquisitions, and discrepancies in data coverage between the two datasets. For example, the linking table includes fields to identify "primary" matches (e.g., linkprim = 'P') and remove duplicate or ambiguous links, ensuring the integrity of the merged data.
The CCM database is critical because it eliminates the need to manually match identifiers across datasets, a process that can be error-prone and time-consuming. Using CCM ensures that we can accurately align CRSP's market data with Compustat's fundamental data, allowing us to calculate variables like the book-to-market ratio and market capitalization, which are essential for constructing Fama-French portfolios and factors. In this step, we will merge the datasets using CCM while cleaning the data to handle duplicates and ambiguities, laying the foundation for the next steps in the analysis.
In summary, the CCM provides link identifiers between CRSP's PERMNO/PERMCO and Compustat's GVKEY. This linking is complex because:
Companies can have multiple securities (PERMNOs)
Companies merge, split, or change names
Corporate structures change over time
The CCM database handles these complexities for us.
Continuing with the methodology#
Following along with the linked methodology from above,
"We merged CRSP and Compustat using the CRSP CCM product (as of April 2010). We matched Compustat's gvkey (from calendar year t-1) to CRSP's permno as of June year t. Data was cleaned for unnecessary duplicates. First, there were cases when different gvkeys exist for the same permno-date. We solved these duplicates by only keeping those cases that are flagged as 'primary' matches by CRSP's CCM (linkprim='P'). There were other unnecessary duplicates that were removed. Some companies may have two annual accounting records in the same calendar year. This is produced by a change in the fiscal year end during the same calendar year. In these cases, we selected the last annual record for a given calendar year.
After data cleaning, the book-to-market ratios for every firm in the sample were calculated by dividing Book Equity (for the fiscal year that ends in year t-1) by the market value of its common equity at the end of December year t-1. These book-to-market ratios and Market Capitalization (as of December year t-1) were assigned to June year t in order to create portfolios."
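A compressed sketch of the matching just described, assuming the link-table columns named in the WRDS documentation (linkdt, linkenddt, linkprim) and the usual convention that Compustat book equity is in $ millions while CRSP market equity is in $ thousands (hence the factor of 1000); the full implementation, including the duplicate handling, is merge_CRSP_and_Compustat.

```python
import pandas as pd

def merge_crsp_compustat_sketch(crsp_jun, comp, ccm):
    """Match fiscal-year t-1 Compustat data to June of year t CRSP data (a sketch)."""
    # Keep primary links only and fill open-ended links with today's date.
    ccm = ccm[ccm["linkprim"] == "P"].copy()
    ccm["linkenddt"] = ccm["linkenddt"].fillna(pd.Timestamp.today())

    # Attach GVKEY links to Compustat records, then map each fiscal year end
    # (datadate) to June of the following calendar year.
    ccm1 = comp.merge(ccm[["gvkey", "permno", "linkdt", "linkenddt"]], on="gvkey")
    ccm1["yearend"] = ccm1["datadate"] + pd.offsets.YearEnd(0)
    ccm1["jdate"] = ccm1["yearend"] + pd.offsets.MonthEnd(6)

    # Keep only records whose June date falls inside the link's validity window.
    ccm2 = ccm1[(ccm1["jdate"] >= ccm1["linkdt"]) & (ccm1["jdate"] <= ccm1["linkenddt"])]

    # Merge with June CRSP data and compute book-to-market:
    # BE (fiscal year ending in t-1, $ millions) over December t-1 ME ($ thousands).
    ccm_jun = crsp_jun.merge(ccm2, on=["permno", "jdate"], how="inner")
    ccm_jun["beme"] = ccm_jun["be"] * 1000 / ccm_jun["dec_me"]
    return ccm_jun
```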
ccm_jun = calc_Fama_French_1993.merge_CRSP_and_Compustat(crsp_jun, comp, ccm)
ccm_jun.tail()
permno | mthcaldt | jdate | sharetype | securitytype | securitysubtype | usincflg | issuertype | primaryexch | conditionaltype | ... | cumretx | mebase | L_me | dec_me | gvkey | datadate | yearend | be | count | beme | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
252156 | 93436 | 2019-06-28 | 2019-06-30 | NS | EQTY | COM | Y | CORP | Q | RW | ... | 0.651583 | 5.847846e+07 | 3.282331e+07 | 5.744195e+07 | 184996 | 2018-12-31 | 2018-12-31 | 4923.243 | 10 | 0.085708 |
252157 | 93436 | 2020-06-30 | 2020-06-30 | NS | EQTY | COM | Y | CORP | Q | RW | ... | 4.832239 | 4.002571e+07 | 1.547848e+08 | 7.574367e+07 | 184996 | 2019-12-31 | 2019-12-31 | 6618.000 | 11 | 0.087374 |
252158 | 93436 | 2021-06-30 | 2021-06-30 | NS | EQTY | COM | Y | CORP | Q | RW | ... | 3.147314 | 2.008447e+08 | 6.022932e+08 | 6.773402e+08 | 184996 | 2020-12-31 | 2020-12-31 | 22376.000 | 12 | 0.033035 |
252159 | 93436 | 2022-06-30 | 2022-06-30 | NS | EQTY | COM | Y | CORP | Q | RW | ... | 0.990760 | 6.688268e+08 | 7.855649e+08 | 1.092191e+09 | 184996 | 2021-12-31 | 2021-12-31 | 30213.000 | 13 | 0.027663 |
252160 | 93436 | 2023-06-30 | 2023-06-30 | NS | EQTY | COM | Y | CORP | Q | RW | ... | 1.166153 | 7.010302e+08 | 6.463570e+08 | 3.897415e+08 | 184996 | 2022-12-31 | 2022-12-31 | 44786.000 | 14 | 0.114912 |
5 rows × 24 columns
Create Portfolios and Factors#
Step 6. Create Portfolios by Size and Book-to-Market.#
"Every June (year t) we calculated the median equity value of NYSE-listed firms using Market Capitalization at June t. We used this median to classify firms as Small or Big in portfolios created at the end of June year t. In a similar fashion, as of June year t, firms are broken into three book-to-market equity groups (Low, Medium, and High) based on the 30% and 70% break-points of the NYSE firms with positive book-to-market equity. In both cases (for size and book-to-market classification), we restricted our sample to those firms with positive book-to-market, positive market cap at June, common equity (share code 10 and 11) and at least two years in the Compustat annual file.
We created a total of six size and book-to-market equity portfolios. Portfolios are created at the end of June and kept for 12 months. Within each portfolio, a monthly value-weighted return is calculated (each month, the weight is the Market Capitalization as of June year t adjusted by any change in price between the end of June t and the end of the previous month)."
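A sketch of the June portfolio assignment, using the ccm_jun columns shown above (me, beme, primaryexch, count); the full version, including the 12-month holding period and the value-weighted monthly returns, is in assign_size_and_bm_portfolios and create_fama_french_portfolios, called below.

```python
import numpy as np
import pandas as pd

def assign_june_portfolios_sketch(ccm_jun: pd.DataFrame) -> pd.DataFrame:
    """Assign each firm to Small/Big and Low/Medium/High buckets each June (a sketch)."""
    ccm_jun = ccm_jun.copy()

    # Breakpoints come from NYSE firms with positive BEME and ME and
    # at least two annual records in Compustat (count starts at 0).
    nyse = ccm_jun[
        (ccm_jun["primaryexch"] == "N")
        & (ccm_jun["beme"] > 0)
        & (ccm_jun["me"] > 0)
        & (ccm_jun["count"] >= 1)
    ]

    # NYSE median ME and 30th/70th percentile BEME breakpoints each June.
    size_bp = nyse.groupby("jdate")["me"].median().rename("sizemedn")
    bm_bp = nyse.groupby("jdate")["beme"].quantile([0.3, 0.7]).unstack()
    bm_bp.columns = ["bm30", "bm70"]
    ccm_jun = ccm_jun.join(size_bp, on="jdate").join(bm_bp, on="jdate")

    # Size buckets: Small (S) vs. Big (B); value buckets: Low (L), Medium (M), High (H).
    ccm_jun["szport"] = np.where(ccm_jun["me"] <= ccm_jun["sizemedn"], "S", "B")
    ccm_jun["bmport"] = np.select(
        [ccm_jun["beme"] <= ccm_jun["bm30"], ccm_jun["beme"] <= ccm_jun["bm70"]],
        ["L", "M"],
        default="H",
    )
    return ccm_jun
```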
ccm4 = calc_Fama_French_1993.assign_size_and_bm_portfolios(ccm_jun, crsp3)
ccm4.tail()
mthcaldt | permno | sharetype | securitytype | securitysubtype | usincflg | issuertype | primaryexch | conditionaltype | tradingstatusflg | mthret | me | wt | cumretx | ffyear | jdate | szport | bmport | posbm | nonmissport | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3428385 | 2023-08-31 | 93436 | NS | EQTY | COM | Y | CORP | Q | RW | A | -0.034962 | 8.191444e+08 | 8.488228e+08 | 0.985904 | 2023 | 2023-08-31 | B | L | 1.0 | 1.0 |
3428386 | 2023-09-29 | 93436 | NS | EQTY | COM | Y | CORP | Q | RW | A | -0.030456 | 7.954494e+08 | 8.191462e+08 | 0.955877 | 2023 | 2023-09-30 | B | L | 1.0 | 1.0 |
3428387 | 2023-10-31 | 93436 | NS | EQTY | COM | Y | CORP | Q | RW | A | -0.197346 | 6.384545e+08 | 7.941983e+08 | 0.767239 | 2023 | 2023-10-31 | B | L | 1.0 | 1.0 |
3428388 | 2023-11-30 | 93436 | NS | EQTY | COM | Y | CORP | Q | RW | A | 0.195379 | 7.631954e+08 | 6.374665e+08 | 0.917141 | 2023 | 2023-11-30 | B | L | 1.0 | 1.0 |
3428389 | 2023-12-29 | 93436 | NS | EQTY | COM | Y | CORP | Q | RW | A | 0.034988 | 7.898983e+08 | 7.620140e+08 | 0.949230 | 2023 | 2023-12-31 | B | L | 1.0 | 1.0 |
# create_fama_french_portfolios puts it all together and then uses
# the weights to create the portfolios
vwret, vwret_n = calc_Fama_French_1993.create_fama_french_portfolios(data_dir=DATA_DIR)
Step 7. Calculation of FF factors#
"The size factor, Small minus Big (SMB), is the difference between the average return on the three Small-firm portfolios and the average return on the three Big-firm portfolios.
The value factor, High minus Low (HML), is the difference between the average return on the two High book-to-market equity portfolios and the average return on the two Low book-to-market equity portfolios. For comparison purposes, we also calculate the number of firms in each portfolio."
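Given a long table of the six value-weighted portfolio returns, the factor construction reduces to a pivot and two averages. The column names below (date, szport, bmport, vwret) are assumptions about the shape of vwret; the actual construction is in create_factors_from_portfolios, called next.

```python
import pandas as pd

def create_factors_sketch(vwret: pd.DataFrame) -> pd.DataFrame:
    """Build SMB and HML from the six size/book-to-market portfolio returns (a sketch)."""
    # One column per portfolio: SL, SM, SH, BL, BM, BH.
    ports = vwret.pivot_table(index="date", columns=["szport", "bmport"], values="vwret")
    ports.columns = [sz + bm for sz, bm in ports.columns]

    factors = pd.DataFrame(index=ports.index)
    # SMB: average of the three Small portfolios minus average of the three Big portfolios.
    factors["SMB"] = (
        ports[["SL", "SM", "SH"]].mean(axis=1) - ports[["BL", "BM", "BH"]].mean(axis=1)
    )
    # HML: average of the two High-BEME portfolios minus average of the two Low-BEME portfolios.
    factors["HML"] = (
        ports[["SH", "BH"]].mean(axis=1) - ports[["SL", "BL"]].mean(axis=1)
    )
    return factors.reset_index()
```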
ff_factors, ff_nfirms = calc_Fama_French_1993.create_factors_from_portfolios(vwret, vwret_n)
Compare Results of Our Manual Calculation to the Actual Factors#
Step 8. Load actual FF1993 factors from Ken French Data Library (via WRDS)#
actual_ff = pull_CRSP_Compustat.load_Fama_French_factors(data_dir=DATA_DIR)
actual_ff = actual_ff[["date", "smb", "hml"]]
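If you do not have WRDS access, a similar benchmark series can be pulled from Ken French's data library via pandas_datareader. This is a sketch of an alternative not used elsewhere in this notebook; note the library reports returns in percent, so divide by 100 to match the decimal returns used here.

```python
import pandas as pd
import pandas_datareader.data as web

# Monthly Fama-French three factors (Mkt-RF, SMB, HML, RF), reported in percent.
ff3 = web.DataReader("F-F_Research_Data_Factors", "famafrench", start="1926-07-01")[0]
ff3 = ff3 / 100                                                # percent -> decimal returns
ff3.index = ff3.index.to_timestamp() + pd.offsets.MonthEnd(0)  # month-end timestamps

actual_ff_alt = ff3.rename(columns={"SMB": "smb", "HML": "hml"})[["smb", "hml"]]
actual_ff_alt = actual_ff_alt.rename_axis("date").reset_index()
```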
Step 9. Merge Actual with Manual Factors#
ff_compare = pd.merge(actual_ff, ff_factors[["date", "SMB", "HML"]], how="inner", on="date")
ff_compare = ff_compare.rename(columns={
"smb": "smb_actual",
"hml": "hml_actual",
"SMB": "smb_manual",
"HML": "hml_manual",
})
ff_compare_post_1970 = ff_compare[ff_compare["date"] >= "01/01/1970"]
ff_compare.set_index("date", inplace=True)
ff_compare_post_1970.set_index("date", inplace=True)
ff_compare_post_1970.head()
smb_actual | hml_actual | smb_manual | hml_manual | |
---|---|---|---|---|
date | ||||
1970-01-31 | 0.0293 | 0.0313 | 0.030789 | 0.028929 |
1970-02-28 | -0.0258 | 0.0393 | -0.025922 | 0.039603 |
1970-03-31 | -0.0232 | 0.0399 | -0.018023 | 0.043425 |
1970-04-30 | -0.0615 | 0.0618 | -0.056743 | 0.064584 |
1970-05-31 | -0.0459 | 0.0333 | -0.047235 | 0.036884 |
Step 10. Compare using correlation#
print(stats.pearsonr(ff_compare_post_1970["smb_actual"], ff_compare_post_1970["smb_manual"]))
print(stats.pearsonr(ff_compare_post_1970["hml_actual"], ff_compare_post_1970["hml_manual"]))
PearsonRResult(statistic=0.995884357414106, pvalue=0.0)
PearsonRResult(statistic=0.9837661086268665, pvalue=0.0)
ff_compare.tail()
smb_actual | hml_actual | smb_manual | hml_manual | |
---|---|---|---|---|
date | ||||
2023-08-31 | -0.0320 | -0.0108 | -0.036583 | -0.005446 |
2023-09-30 | -0.0249 | 0.0145 | -0.025185 | 0.013751 |
2023-10-31 | -0.0388 | 0.0019 | -0.044499 | 0.005002 |
2023-11-30 | -0.0003 | 0.0166 | 0.001777 | -0.001191 |
2023-12-31 | 0.0636 | 0.0492 | 0.072666 | 0.026592 |
Step 11. Plot Factors to Compare#
plt.figure(figsize=(16, 12))
plt.suptitle("Manually-Created Factors vs Ken French Data Library", fontsize=20)
ax1 = plt.subplot(211)
ax1.set_title("SMB", fontsize=15)
ax1.set_xlim([datetime.datetime(1961, 1, 1), datetime.datetime(2022, 6, 30)])
ax1.plot(ff_compare["smb_actual"], "r--", ff_compare["smb_manual"], "b-")
ax1.legend(("smb_actual", "smb_manual"), loc="upper right", shadow=True)
ax2 = plt.subplot(212)
ax2.set_title("HML", fontsize=15)
ax2.plot(ff_compare["hml_actual"], "r--", ff_compare["hml_manual"], "b-")
ax2.set_xlim([datetime.datetime(1961, 1, 1), datetime.datetime(2022, 6, 30)])
ax2.legend(("hml_actual", "hml_manual"), loc="upper right", shadow=True)
plt.subplots_adjust(top=0.92, hspace=0.2)
