Showing posts with label Academic Research. Show all posts

Tuesday, May 3, 2011

FMA Decisions Are Out!

I just heard from a coauthor - we got a paper accepted at the Denver FMA meeting in October. The paper resulted from taking an idea we'd been working on and applying it to another data set we had available.

It's funny - we submitted two papers: this one was an early version, and the other was pretty much finished. However, to be fair, the results on this one were more interesting. And since we'd already gotten one paper on the program, we were actually glad we got the second one rejected - doing two papers at a conference means there's less time for catching up with friends.

This tale of two papers reminds me of a piece I read a while back (unfortunately, I can't recall its title). It discussed how there's a trade-off in research between "newness" and "required rigor". In other words, if you're working on a topic that's been done to death (e.g. capital structure or dividend policy), you'll be asked to do robustness tests out the wazoo. On the other hand, if it's a more novel idea, the "newness" factor gets you some slack on the rigor side.

In general, however, the "rigor" bar has been ratcheting up for the last 20-30 years, regardless of the "newness" factor. To see this, realize that the average length of a Journal of Finance article in the early 80s was something like 16 pages - now it's more like 30-40. As further (anecdotal) evidence, a friend of mine had a paper published on long-run returns around some types of mergers in the Journal of Banking and Finance about 9 years back. They made him calculate the returns FIVE different ways.

In any event, to make a long story short, I'm hoping we got accepted at FMA because the reviewers thought our paper was a good, new idea.

But it's probably because we got lucky.

But either way, we'll take it - see you in Denver!

Thursday, January 20, 2011

Englishizing The Paper

I'm in editing mode. Lately, I've been working with a couple of coauthors who are very, very good theorists. Their game theory/math chops are so much better than mine that I try not to discuss it. In addition, one of them is very connected with the top people in the accounting area (he was the other's chair, which is how we met).

They just dropped off a roughly 40-page current version of a paper we've been working on for quite a while. The logic of the paper flows soundly, and the empirics are solid.

Unfortunately, neither of my coauthors has English as a mother tongue. So, I'm in charge of "Englishizing" the paper.

Oh my!

Update: I may have given the wrong impression (at least, based on a comment by Bob Jensen). My colleague and I have been working on this paper for quite some time, and we've all been involved with most parts of the paper (with the exception of the game-theoretic part, which is admittedly not my strength). My contributions have been primarily in the designing of the tests (my colleague is a game theorist, not an empiricist) and in the final editing of the paper.
Since I did my early education at Our Lady of the Bleeding Knuckles Elementary School (and yes, they did use the curtain rod), I received pretty good training in the fundamentals of what I call the "micro" part of writing.

Monday, January 17, 2011

I Don't Work Well Under Deadlines

"I don't work well under deadlines. But without them, I don't work at all."

I heard that line about 15 years ago from my dissertation chair, and it's stuck with me.

In the last three weeks, I've sent off three papers to conferences.
  • For one, I had to edit about 40 pages or so. My coauthor (who is a theorist and non-native speaker) wrote the first draft. It's a good idea, but the writing needed a lot of work. We sent it off to the American Accounting Association Meeting. We also sent it off to a regional conference, because the school where one of my coauthors works counts these things.
  • A second is a paper that's been floating around for a while. It needed one final going over before submitting to a journal. We realized a week ago that the initial version had been submitted to the Financial Management Association (FMA) conference and rejected a year ago. This version has a lot more stuff in it and is much more polished. However, I hadn't gotten around to making the last few changes to it. So, I finished them and sent it off to the FMA conference. Now it just needs a little more work and we can submit it to a journal.
  • A third piece involved a paper we'd talked about with a graduate student. It involves a cross-breeding of his dissertation data and a previous paper done by the coauthor from the paper above. We got the dataset from the student with ten days to go before the deadline, and I started writing things up while my other coauthor started the data analysis. Somehow, we produced a 30 page paper with decent results in that time. It also got submitted to FMA. It still needs work, but I think we can have a journal-submittable version in a month or so.
Final tally - three papers submitted to four conferences in a span of about two weeks, followed by 12 hours of solid sleep.

Now I need to finish edits on a short piece that has a conditional acceptance. There's no deadline looming, but I'm in the groove from the last couple of weeks, so I'm here at the College on MLK day.

Wednesday, September 8, 2010

Spam, Spam, Spam, Spam

A couple of years back, I was on the train (coming back from a consulting gig), and, being an extrovert, I started talking with a guy sitting next to me. He was a "stock tout". In other words, he was one of those guys who sent out emails pushing one stock or another. He claimed it was a pretty profitable business.

Now I have some evidence backing him up.
Here's a pretty interesting piece on the market effects of internet stock spam. A couple of years ago, Frieder and Zittrain did a study titled Spam Works: Evidence from Stock Touts and Corresponding Market Activity. They found that spammers "touting" (i.e. pushing) a stock has some pretty significant effects on the touted stock's price and trading volume. Here's the abstract (emphasis mine):
We assess the impact of spam that touts stocks upon the trading activity of those stocks and sketch how profitable such spamming might be for spammers and how harmful it is to those who heed advice in stock-touting e-mails. We find convincing evidence that stock prices are being manipulated through spam. We suggest that the effectiveness of spammed stock touting calls into question prevailing models of securities regulation that rely principally on the proper labeling of information and disclosure of conflicts of interest as means of protecting consumers, and we propose several regulatory and industry interventions.

Based on a large sample of touted stocks listed on the Pink Sheets quotation system and a large sample of spam emails touting stocks, we find that stocks experience a significantly positive return on days prior to heavy touting via spam. Volume of trading responds positively and significantly to heavy touting. For a stock that is touted at some point during our sample period, the probability of it being the most actively traded stock in our sample jumps from 4% on a day when there is no touting activity to 70% on a day when there is touting activity. Returns in the days following touting are significantly negative. The evidence accords with a hypothesis that spammers "buy low and spam high," purchasing penny stocks with comparatively low liquidity, then touting them - perhaps immediately after an independently occurring upward tick in price, or after having caused the uptick themselves by engaging in preparatory purchasing - in order to increase or maintain trading activity and price enough to unload their positions at a profit. We find that prolific spamming greatly affects the trading volume of a targeted stock, drumming up buyers to prevent the spammer's initial selling from depressing the stock's price. Subsequent selling by the spammer (or others) while this buying pressure subsides results in negative returns following touting. Before brokerage fees, the average investor who buys a stock on the day it is most heavily touted and sells it 2 days after the touting ends will lose close to 5.5%. For those touted stocks with above-average levels of touting, a spammer who buys on the day before unleashing touts and sells on the day his or her touting is the heaviest, on average, will earn 4.29% before transaction costs. The underlying data and interactive charts showing price and volume changes are also made available.
If you're not convinced (or even if you are), I have a couple of names of people who are related to the former finance minister of Nigeria who need your help getting money out of the country (and are willing to share the profits with you). I'll give them to you for a small finder's fee. Just send me the routing number on your bank account and I'll take care of it electronically.

HT: The Psi-Fi Blog

Of course, with a title like that, it was inevitable


Thursday, July 29, 2010

I'm Still Alive (but things might make me laugh myself to death)

Because of the end of the semester, some health issues (since resolved), working on research, and being a bit burned out, I haven't posted anything for several months.

But like one of the best business tacticians of our times says, "Just when I thought I was out... they pull me back in". So I guess this is my "welcome back" post.

I just received a referee's report that made me laugh at its awesomeness. First, a bit of background: I sent a paper to a lower-tier journal back in June of 2008. There was no response for over a year, so I sent several emails (and voice mails) to the editor with no response. Finally, getting fed up, back in November I sent him an email (and follow-up voicemail) asking him to withdraw the paper. We subsequently got a revise and resubmit at another journal.

Then today I get this from the original journal (i.e. the one where I'd withdrawn the paper long ago):
RE: XXXXX and the use of XXX

I have now received a report on your paper in which the referee makes a number of recommendations for improvement. Unfortunately I am unable to accept the paper for publication in its current form. However I would be happy to reconsider the paper if you were to revise it along the lines suggested by the referee. I look forward to your resubmission.

The reviewer's comments are given below.

Referee: 1
Comments to the Author

This paper examines the relationship between XXX and XXX. However, Pearson correlation coefficient that this paper uses is very ordinary. And this often does not measure the non-linear relationship for variables. In addition, the paper does not make the necessary statistical test and analysis to the studying results.
Note: emphasis is mine, and I only changed the relatively few words necessary to protect the guilty.

Yes, that is the sum total of the referee's report. I'd always heard that the main difference between "good" journals and "weak" ones wasn't so much the mean quality of reviewers but the variance. Now I have my own data point.

Next time I will make sure to use "extraordinary" Pearson correlation coefficients and "make the necessary statistical test and analysis to the studying result".

update: I told a friend and former classmate of mine about this, and he suggested that "Outstandingly Bad Referee Reports" would make for a fun session topic at a conference - particularly if we had a journal editor select the panel members. However, he suggested that the entertainment value would be much better if you could somehow ensure that (unbeknownst to each other) both the recipients and the originators of the reports were on the panel.

But that would be wrong. Funny, but wrong.

Thursday, December 24, 2009

As The Semester Winds Down

Since Unknown University starts (and ends) their fall semester a bit late, I'm just putting the finishing touches on my grades - two classes down and one (the smallest, luckily) to go.

It's been a tough semester - three preps (for the non academics among you, a prep is a unique class - so three preps means I taught three different classes), and one was a brand new one (Fixed Income) for me. I took it because the senior faculty who regularly teaches it took a sabbatical, and it's required of all our students. The new prep took far more time than I'd thought, so I didn't get as much research done as I'd hoped.

The winter break will be dedicated first to getting two papers completed and submitted to journals. I let things slide a bit these last few years due to the Unknown Son's illness, so I'm glad to be finally working on things that have the potential to go to decent journals - these two will likely be sent to Financial Management and Journal of Banking and Finance (two very solid journals). As for the other things I'm working on, one should go to a solid accounting journal (JAAF), another to Journal of Futures or Journal of Derivatives, and another will be targeted to the Financial Analyst's Journal. I'm also working on a piece with a PhD student that will hopefully be finished in time to submit to the FMA annual meetings.

Somewhere in there, I'll also make some minor changes to my class (it's the same class I taught for the first time this past semester, so it's in pretty good shape). It shouldn't take more than a day or two to make the changes, since I prepped pretty thoroughly for it last time.

It's an ambitious schedule, but three of the pieces use the same data set, and a fourth is mostly done. With a bit of hard work, I should have a very productive Winter break. So, to all of my coauthors who read the blog: take heart - things will be done soon enough.

On a more somber note, please keep Mark Bertus and his family in your prayers. He's a fairly young faculty member at Auburn, with several young children. He's in the final stages of colon cancer, and is a remarkable guy. He'll leave an amazing legacy of memories to those of us who've had the privilege of knowing him. You can read the blog his wife has been maintaining to keep everyone informed about the illness here.

Mark's journey reminds me of something Steve Brown (a radio preacher) once said. It's something to the effect of "Whenever a pagan gets cancer, God allows a Christian to get cancer so that the world will see the difference in how Christians deal with it." Depending on your beliefs, that might or might not sit all that well with you. But as you read his blog, you'll see that it definitely applies here.

To all who're reading this - Have a Merry Christmas (or whatever holiday you choose to celebrate).

Tuesday, September 1, 2009

The Summer Winds Down

It's been a busy week here in Unknownville. Unknown University starts up next week (we start later than most), so we've had a rash (or is that a plague?) of meetings. I'm still juggling several papers (writing a lit review for one, doing data work for another, and some polishing/editing for a third) and sequentially disappointing my coauthors.

Ah well - them's the breaks. But I have to be nice, since coauthors on each paper read the blog. So fear not, coauthors - my parts will be done in good time.

Along those lines, I just received a bunch of results from one coauthor, some of which are pretty interesting. It's an area that I had an unsuccessful paper in several years ago, and she uses a new and difficult data set that allows us to revisit the topic in a very new way. We've got a good story and good results, and it'll go to the head of the pile, since we're sending the paper to an upcoming conference (the Eastern Finance Association annual meeting) for which the deadline is next week. I hope it gets accepted, since the Unknown Wife and I plan on making it a little vacation (she's never been to Miami). We're pretty confident - it's a good idea, good data, and believable results (and we know the program chair).

After all, that's potentially one of the perks of academia - you can sometimes have the university partially fund your vacations by choosing your conferences wisely.

Finally, I just got an email telling me I've won 12,841,340 Euros in an international lottery that I don't recall entering. I have to share it with 14,000 winners, but it'll give my students something to calculate when I cover foreign exchange rates.

Unknown Daughter is now back in school and once again well ahead of her classmates. And Unknown Baby Boy continues to alternately make us laugh and make us gag as he exceeds the manufacturer's capacity on his diapers (or, as we call them, "Code Brown!"). Ah well - the wages of child rearing.

Peace

-UP

Friday, August 28, 2009

The Difficulty of Measuring the Gains To Fundamental Research

Here's a paper by Bradford Cornell that I've had in my inbox for a while. It's titled "Investment Research: How Much Is Enough?" Here's the abstract:
Aside from the decision to enter the equity market, the most fundamental question an investor faces is whether to passively hold the market portfolio or to do investment research. The thesis of this paper is that there is no scientifically reliable procedure available which can be applied to estimate the marginal product of investment research. In light of this imprecision, investors are forced to rely on some combination of judgment, gut instinct, and marketing imperatives to determine both the research approaches they employ and the capital they allocate to each approach. However, decisions based on such nebulous criteria are fragile and subject to dramatic revision in the face of market movements. These revisions, in turn, can exacerbate movements in asset prices.
It raises some interesting issues about the difficulties in measuring gains to fundamental research. To name a few:
  • The difficulty in measuring "abnormal" performance, given the stochastic (i.e. random) nature of stock returns
  • The time-varying nature of any possible gains to analysis (funds and strategies change over time).
  • Given the needs for sample size and duration necessary to get high levels of statistical significance, most findings are of pretty low confidence
  • The ad hoc nature of many analysis strategies and the role that judgement plays
It's worth reading, and gives some good points for discussion in a class module on efficient markets (and the related topic of "anomalies" like the size and value effects). You can read the working paper on SSRN here.

Friday, August 7, 2009

Research Love/Hate

I love doing research. Actually, I like finding out new stuff. But sometimes the research process makes me rue the fact that I work on a dry campus.

Like this week.

I've been working on a paper where I needed to update the data. Since the latest version was a rush job put together for a conference (yes - this happens a lot), I decided to go back and check every line of my program (always a good thing to do). I also wanted to do the anal-retentive (I know, that's redundant - except in research, where it's expected) thing where I can relate what happens to my sample at each filtering step. While doing this, I found out that I'd used the wrong data code for one of my variables - one of my MAIN variables. So, the whole data set was, in a word, crap.

After taking a deep breath, I made the corrections and redid most of the analysis. Luckily, the results still held, with minor modifications.

Then I discovered a minor discrepancy in the number of observations at one step. It's likely not very important at all. But I need to track it down before I go further. So, since my coauthor reads the blog, he'll just have to wait another day or so. But I'm getting close, so I should be able to finish my part of the work and ship it off to my coauthors in another day or two.

Then a coauthor on another paper told me that she'd found an error she made in her coding. In this case, when she found and corrected her error, it quadrupled our sample size. If you're an empiricist, you know how much an increase from about 90 observations to about 400 means. If not, let's just say it's a big deal to the alpha nerds among us (and that description applies to most of my friends).

So, like most days in the research salt mines, there's some good and some bad.

Now about that "dry campus" thing...

Wednesday, August 5, 2009

Advice From A Journal Editor

Here's a very interesting and informative piece titled "Edifying Editing" by R. Preston McAfee (former co-editor of AER and editor of Economic Inquiry). It's not entirely applicable to finance because he's an econ guy. But there is a great deal of similarity between the fields. Here are a few things that stuck with me:
  1. He cites a paper by Dan Hamermesh (1994), who discovered that, conditional on not receiving a report in 3 months, the expected waiting time was a year. So, if you're a reviewer and want to endear yourself to editors, get stuff done quickly. I know that the longer I wait on a referee report, the less I feel like punching it out.
  2. Around 25% of the submissions to AER during his tenure were rejected due to poor execution. That is, the paper represented a good start on an article-worthy topic, but provided too little for the audience. I was recently discussing a former student (and current coauthor) with a friend of mine who edits a pretty good journal. His comment was that my former student does good work, but "needs to finish his papers". Unfortunately, the student often sends papers out to journals to get feedback from referees. That's what colleagues are for.
  3. He feels that a surprising number of papers provide no meaningful conclusion. Don't merely reiterate your introduction in the conclusion. The introduction motivates a problem and summarizes your results; the conclusion is your opportunity to tie things together and make some parting shots.
  4. He feels that submitting to a journal where the editor has deep expertise in your paper's topic usually produces a higher bar but less variance in the evaluation.
All in all a very worthwhile read. So read it here.

HT: Marginal Revolution

Wednesday, July 22, 2009

High Impact Research

Most of what I do is not "high impact." Ah well- I yam what I yam.

Sunday, July 19, 2009

"Garbage Research" and The Equity Risk Premium

Instead of the CCAPM (Consumption CAPM), we now have the GCAPM (Garbage CAPM). Alexi Savov (graduate student at U of Chicago) finds that he can explain much more of the Equity Risk Premium using aggregate garbage production than he can using National Income and Product Account (NIPA) data. Here's the logic behind his research (from Friday's Wall Street Journal article titled "Using Garbage to Measure Consumption"):
In theory, one way to explain the premium would be to look at consumption, a broad measure of wealth. People should demand a premium from an investment that goes down when consumption goes down. That’s because the alternative — bonds — hold on to their value when consumption declines. Another way to put it: When you are making lots of garbage, you are rich. When you stop making garbage, you are poor. Unlike bonds, which continue to pay out whether you produce lots of garbage (and are rich) or not, stocks are likely to lose their value during bad times. Therefore, investors should want a large reward for putting their money in something whose value decreases at the same time as their overall wealth decreases.
Unfortunately, the data typically used to measure consumption (the US Government's figures for the personal expenditure on nondurable goods and services category in the National Income and Product Account) don't have a lot of variation. So, they don't work very well as an explanatory variable. Savov finds that when he uses EPA records on aggregate garbage production, garbage growth exhibits a correlation with equity returns roughly twice as high as the NIPA/equity returns correlation. Here's the abstract of his paper (downloadable from the SSRN):
A new measure of consumption -- garbage -- is more volatile and more correlated with stocks than the standard measure, NIPA consumption expenditure. A garbage-based CCAPM matches the U.S. equity premium with relative risk aversion of 17 versus 81 and evades the joint equity premium-risk-free rate puzzle. These results carry through to European data. In a cross section of size, value, and industry portfolios, garbage growth is priced and drives out NIPA expenditure growth.
Read the whole thing here.

Friday, July 10, 2009

Getting Your Data Straight

I've made progress on the paper I'm working on. Unfortunately, this week has been a good illustration of a quote from McCloskey: I believe it went something like "90% of writing is getting your thoughts straight, and 90% of empirical work is getting your data straight."

Unfortunately, my data wasn't straight - I realized that I had used the wrong data code (for a certain type of dividend distribution) from CRSP. So, my previous analysis was basically crap (that's a technical term for the uninitiated) and had to be redone using the proper data set.

Luckily, it looks like my primary results still hold after using the proper code, with a few minor changes. For now, I'm still doing the preliminary descriptive stuff. Since I did the initial version of the paper in a hurry (hey - it was a conference deadline), I took a few shortcuts. This time, I'm going back to step 1 and going over every line of code, and (just as important) making sure I know how the sample changes at each point. As a result, I'm much more confident with my data this time around.
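The "know how the sample changes at each point" habit is easy to automate as a filter log. Here's a minimal sketch - in Python rather than SAS, and with purely hypothetical filter names and data codes, not the paper's actual screens:

```python
# Minimal sketch of step-by-step sample accounting: apply each filter
# in turn and record how many observations survive it. Filter names,
# column names, and the CRSP-style code below are all hypothetical.

def apply_filters(rows, filters):
    """Apply (name, predicate) filters in order, logging the count after each."""
    log = [("initial sample", len(rows))]
    for name, keep in filters:
        rows = [r for r in rows if keep(r)]
        log.append((name, len(rows)))
    return rows, log

if __name__ == "__main__":
    sample = [
        {"price": 12.0, "distcd": 1232},
        {"price": 0.5,  "distcd": 1232},
        {"price": 30.0, "distcd": 1272},
    ]
    final, log = apply_filters(sample, [
        ("price >= $1",         lambda r: r["price"] >= 1.0),
        ("correct distr. code", lambda r: r["distcd"] == 1232),
    ])
    for step, n in log:
        print(f"{step}: {n} obs")
```

Printing the log at every run makes it obvious where observations disappear - which is exactly how a wrong data code announces itself.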

But doing the descriptive statistics is still (to me) about the most boring part of the paper. Still, it's gotta be done.

Tuesday, July 7, 2009

Momentum Effects and Firm Fundamentals

The more Long Chen's work I read, the more I like it. I recently mentioned one of his pieces on a new 3-factor model. Here's another, on the momentum effect, titled "Myopic Extrapolation, Price Momentum, and Price Reversal." In it, he links the well-known momentum effect to patterns in firm fundamentals. Here's the abstract:
The momentum profits are realized through price adjustments reflecting shocks to firm fundamentals after portfolio formation. In particular, there is a consistent cross-sectional trend, from short-term momentum to long-term reversal, that happens to earnings shocks, to revisions to expected future cash flows at all horizons, and to prices. The evidence suggests that investors myopically extrapolate current earnings shocks as if they were long lasting, which are then incorporated into prices and cash flow forecasts. Accordingly, the realized momentum profits can be completely explained by the cross-sectional variation of contemporaneous earnings shocks or revisions to future cash flows. Importantly, these cash flow variables dominate the lagged returns in explaining the realized momentum profits. As a result, the realized momentum profits represent cash flow news that has little to do with the ex ante expected returns. In fact, the ex ante expected momentum profits are significantly negative.
So, in essence, he finds that investors ignore mean-reverting patterns in firm earnings, and over-weight recent earnings shocks.

Very nice.

On an unrelated note, the Unknown Family will be traveling the next few days for a family reunion in West Virginia (the Unknown Wife's father grew up there, and that fork in the family tree has a get-together every year). So, unless I schedule a few pieces to post automatically, posting will likely be slim for the next few days.

Monday, July 6, 2009

Updating a Dataset Always Takes Longer Than Expected

I'm still working on updating my data. As usual, what I thought would be a "simple" three to four-day job has stretched out to almost two weeks of work. At least I'm not this person.

Wednesday, July 1, 2009

Arrrrggghh! SAS is Evil!

Just another day (or two) of torturing data. Like I mentioned a couple of days ago, a week back I decided to update a data set to include the last year or so of data (the data sources I use were recently updated). Like most "simple" jobs, it's turned out to be much more of a hairball than I expected. Although the program I used was fairly simple to rewrite, I realized that I had to update not one, not two, but THREE datasets in order to bring everything up to the present.

Caution: SAS Geekspeak ahead

One of the data sets is pretty large (it was about 70 gigabytes, but with the updates and indexing I've done, it's almost 100 gig). So, adding the new data and checking it took quite a while (no matter how efficiently you code things, SAS simply takes a long time to read a 70 gigabyte file). I thought I had everything done except for the final step. Unfortunately, the program kept crashing due to "insufficient resources."

For the uninitiated, when manipulating data (sorting, intermediate steps on SQL select statements, etc...) SAS sets up temporary ("scratch") files. They're supposed to be released when SAS terminates, but unfortunately, my system wasn't doing that. So, I had over 180 gigabytes of temporary files clogging up my hard drive. This means that there wasn't enough disk space on my 250 gigabyte drive for SAS to manipulate the large files I'm using.
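A small script can at least tell you how bad the clog is before you start deleting by hand. Here's a hedged sketch in Python (not SAS): the work-directory location and the "SAS_work" name pattern are assumptions - check the WORK path in your own SAS configuration, and only delete these while SAS is not running:

```python
# Sketch of a check for orphaned SAS scratch directories. The "SAS_work"
# prefix and the idea that leftovers live directly under the WORK root
# are assumptions about a typical SAS setup, not guaranteed behavior.
import os

def orphaned_work_dirs(work_root, pattern="SAS_work"):
    """Return (path, total_bytes) for each leftover SAS work directory."""
    results = []
    for entry in os.listdir(work_root):
        path = os.path.join(work_root, entry)
        if entry.startswith(pattern) and os.path.isdir(path):
            # total size of everything under this scratch directory
            size = sum(
                os.path.getsize(os.path.join(dirpath, f))
                for dirpath, _, files in os.walk(path)
                for f in files
            )
            results.append((path, size))
    return results

if __name__ == "__main__":
    for path, size in orphaned_work_dirs(r"C:\SASWORK"):  # hypothetical path
        print(f"{path}: {size / 1e9:.1f} GB")
```

Running this (or an equivalent OS search) after every crash would have flagged the 180 GB of leftovers long before the eight-hour jobs died.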

Of course, I only realized this when my program crashed AFTER EIGHT HOURS OF RUNNING! TWICE!

I've now manually deleted all the temporary files, and I'm running the program overnight to see if this fixes the problem.

Ah well - if it was easy, anyone could do it.

update (next morning): Phew! It ran - it seems the unreleased temporary files were the issue. On to the next problem.

Tuesday, June 30, 2009

A Simple (and Impressive) New Three Factor Return Model

First, a little background on "factor models": The CAPM model for estimating expected returns is the oldest and most widely known of all finance models. In it, exposure to systematic risk (i.e. beta) is the only factor that gets "priced" (i.e. that's related to expected returns).

In 1993, Fama and French showed that a three factor model (the CAPM market factor plus a size factor and a value/growth factor) did a much better job of explaining cross-sectional returns when compared to the "plain vanilla" CAPM.
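For the non-empiricists: operationally, "estimating a three-factor model" just means regressing a stock's excess returns on the three factor return series and reading off the loadings. Here's a hedged sketch - the factor series below are simulated with known loadings, not the actual Fama-French factors:

```python
# Hedged sketch: estimating three-factor loadings by OLS on simulated
# data (NOT the actual Fama-French factor series).
import numpy as np

def three_factor_betas(excess_ret, mkt, smb, hml):
    """Regress excess returns on MKT, SMB, and HML; return (alpha, betas)."""
    X = np.column_stack([np.ones_like(mkt), mkt, smb, hml])
    coefs, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
    return coefs[0], coefs[1:]

rng = np.random.default_rng(0)
n = 240  # twenty years of monthly observations
mkt, smb, hml = rng.normal(0, 0.04, (3, n))
# simulate a stock with known loadings of 1.1, 0.5, and 0.3
r = 1.1 * mkt + 0.5 * smb + 0.3 * hml + rng.normal(0, 0.01, n)
alpha, betas = three_factor_betas(r, mkt, smb, hml)
```

With enough observations, the estimated betas recover the simulated loadings, and the alpha (the part the factors can't explain) is near zero - which is exactly the horse race the competing factor models below are running.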

Since the FF model became popular, a number of studies have come out that identify other factors that seem to be associated with subsequent returns, such as momentum (Jegadeesh and Titman, 1993), distress (Campbell, Hilscher, and Szilagyi, 2008), stock issues (Fama and French, 2008) and asset growth (Cooper, Gulen, and Schill, 2008).

Now, on to the meat of this post - another factor model. This one is based on q-theory (i.e. on the marginal productivity of a firm's investments). Long Chen and Lu Zhang (from Washington University and Michigan, respectively) recently published a paper, "A Better Three-Factor Model That Explains More Anomalies", in the Journal of Finance. They propose a three-factor model, with the three factors being the aggregate return on the market, the firm's asset-scaled investment, and the firm's return on assets. Their model significantly outperforms the Fama-French (FF) model in explaining stock returns, does a better job (relative to FF) at explaining the size, momentum, and financial distress effects (i.e. you don't need to add additional factors for these effects), and does about as well as FF in capturing the value (i.e. book/market) effect. Here's a taste of their results:
  • The average return to the investment factor (i.e. the difference between the low and high investment firms) is 0.43% per month over the 1972-2006 sample period. When measured only among small firms, the return difference between low and high investment firms is about 26% annually.
  • The average return to the ROA factor (the difference between returns to the firms with the lowest and highest ROA) is 0.96% per month over the sample period (with a high/low spread of about 26% for the smallest firms).
  • The differences in high vs. low portfolios persist (albeit in smaller magnitudes) after controlling for Fama-French and momentum factors.
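To make the mechanics concrete, here's a rough sketch (my own, not from the paper) of how you'd estimate a stock's loadings on a three-factor model by OLS. The factor series below are simulated - they are not the actual Chen-Zhang factors, just made-up data with similar monthly magnitudes:

```python
# Illustrative only: regress a stock's excess returns on three factors
# (market, investment, ROA) by OLS. All data here are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_months = 240

# Simulated monthly factor returns, in decimal (0.0043 = 0.43%/month)
mkt = rng.normal(0.005, 0.045, n_months)   # market excess return
inv = rng.normal(0.0043, 0.02, n_months)   # low-minus-high investment
roa = rng.normal(0.0096, 0.025, n_months)  # high-minus-low ROA

# A hypothetical stock with known "true" loadings, plus noise
true_betas = np.array([1.1, 0.4, 0.3])
X = np.column_stack([mkt, inv, roa])
r = X @ true_betas + rng.normal(0, 0.03, n_months)

# OLS with an intercept (the intercept is the model's alpha)
X1 = np.column_stack([np.ones(n_months), X])
coefs, *_ = np.linalg.lstsq(X1, r, rcond=None)
alpha, betas = coefs[0], coefs[1:]
print("alpha:", round(alpha, 4), "betas:", np.round(betas, 2))
```

If the model "works" for this stock, alpha should be close to zero and the betas should recover the true loadings (up to sampling error).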
It's definitely worth a read (in fact, it'll be on the reading list for my student-managed fund class). You can find an ungated version of the paper on SSRN here.

HT: CXO Advisory Group

Thursday, May 28, 2009

SAS is the Devil

I've spent 10 hours over the last two days debugging a SAS program I wrote about 2 months ago. It was written for a paper that's coming along nicely, but I haven't revisited the data (or the program) for a while.

Unfortunately, I didn't document the program very well.

I thought, "well, all I have to do is run this one test. That shouldn't take long."

Cue Jaws theme song and commence profanity
...

/profanity

When will I learn?

Now that I've gotten down to a part of the program that has to run for a while (merging two VERY large (> 50 gig) datasets), I can take a break.

update: It finally finished running. It "only" took 14 hours (yes, that's right, 14 hours). And that's after having used every trick I knew to make it more efficient.
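For what it's worth, one reason those merges take forever: a SAS DATA-step MERGE with a BY statement requires both datasets to be sorted on the key, and the sorts on 50-gig files are usually the expensive part. Once sorted, the merge itself is a single linear pass. Here's that single-pass idea sketched in Python (toy one-to-one data with made-up firm ids, not my actual datasets):

```python
# Toy sketch of a sorted (two-pointer) inner merge, the same idea as
# a SAS MERGE ... BY on pre-sorted inputs: one linear pass, no need
# to hold either full dataset in memory at once.

def merge_by_key(left, right):
    """Inner-join two lists of (key, value) pairs, each sorted by key.

    Assumes keys are unique within each list (one-to-one merge).
    """
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i][0] < right[j][0]:
            i += 1                      # unmatched left record
        elif left[i][0] > right[j][0]:
            j += 1                      # unmatched right record
        else:
            out.append((left[i][0], left[i][1], right[j][1]))
            i += 1
            j += 1
    return out

prices = [(1001, 25.0), (1002, 13.5), (1004, 7.2)]    # keyed by firm id
fundamentals = [(1001, 0.12), (1003, 0.08), (1004, 0.05)]
merged = merge_by_key(prices, fundamentals)
print(merged)  # → [(1001, 25.0, 0.12), (1004, 7.2, 0.05)]
```

The pass is O(n + m); it's the up-front sorting (O(n log n), mostly on disk at these sizes) that eats the hours.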

Thursday, May 7, 2009

A Good Conference, Followed By More Crazy

I thought I'd put down a few impressions of the EFA conference from this past week. While my stay was pretty short, it was a very good time: catching up with old friends, making some new ones, getting a lot of work done, and SLEEEEEP!

Because of all the other stuff going on in the Unknown Household, I was only able to get away for a day, so I took an early Friday flight to Baltimore, followed by the train to Washington. I arrived at the conference hotel just before lunch. Between 11:00 Friday and 1:30 Saturday (when I took the hotel shuttle to the airport for my return flight) I
  • Had two papers presented (both were presented by coauthors), and received good comments on both. In both cases, the papers were well received by the discussant and the audience.
  • Met with coauthors on three papers (the two mentioned above and a third one that's also coming along nicely). On two of the papers, we talked briefly about what needs to be done next to get them out the door. On the third paper, my coauthor and I spent about an hour applying various methods of statistical torture to the data, yielding some pretty nice confessions (oops! - that would be "results").
  • Arranged to present a paper at my undergraduate Alma mater in the fall. I know a good number of the faculty, none of whom were there back in the dark ages when I was an undergrad. We've been talking about my coming to present some of my work for some time, and it looks like it'll finally happen.
  • Talked with another potential coauthor about combining some of my data with his methodology.
  • Most importantly, I went to bed by 9:30 and (since there was no one clamoring for a 3 a.m. feeding) slept almost 9 1/2 hours without interruption. Woo Hoo!
Just to show that nothing ever changes for long, when I got back, I called the Unknown Wife, and she said it was all right to go to a book store and veg a bit (it's one of my favorite ways to decompress).

Of course, this was followed by a second phone call an hour later telling me that Unknown Son was getting nosebleeds. We called the on-call oncologist and were informed that he was likely low on platelets (they're what makes blood clot, and counts often drop following chemotherapy). So, it was a quick drive home, followed by a trip to the ER, followed by an overnight stay at the hospital so he could get a platelet transfusion (since it was the weekend, we couldn't do it on an outpatient basis).

He was discharged the next morning, and all's been quiet since.

For now.

Saturday, March 7, 2009

This Is Pretty Much True For Most First Drafts

This is true for my first drafts, and even more so for most grad students.

Sad, but true.