Sunday, February 8, 2015

Economies of Scale Versus the Learning Curve



As Besanko et al. note, there is a difference between economies of scale and the learning curve (81). With the learning curve, a company's unit costs fall as it accumulates experience over time, whereas economies of scale mean lower unit costs the more a company produces in a given period.
            These two concepts are independent of each other. Which one matters more seems to depend on the complexity of the task a company performs. If the task is simple but capital-intensive, economies of scale dominate. An example of a simple, capital-intensive business is the operation of a strip mine. Once a business has the plot of land it wants to mine, all it needs is more equipment and relatively easily trained operators for that equipment. The more equipment, the more the company can carry out of the mine, and the cheaper each pound of rock is to extract.
            Conversely, throwing capital at a problem is not always the answer. Sometimes human capital is the most important element of a business. Service providers are the usual example, but the same holds in skilled manufacturing. For example, some of the most expensive watches in the world are still made by hand in Switzerland. These watches are miracles of the jeweler’s craft, even if the best handmade watch will never keep time as well as a twenty-dollar digital watch. The companies that make them position their wares as luxury goods, so they are not in the same market as the cheap digital timepiece. They are made at a small scale for discerning buyers. Here is where the learning curve is important. Well-trained watchmakers can make more watches faster and with higher accuracy. This allows a company that employs these artisans to have lower costs per unit. It also discourages new entrants to the market, keeping the retail value of the watches they sell inflated.
            With these two examples, it becomes possible to say that both the learning curve and economies of scale are important to the financial health of companies. The deciding factor between them is how much the human element matters to the task at hand.
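To make the contrast concrete, here is a minimal Python sketch. The cost functions and every number in it are illustrative assumptions of mine, not figures from Besanko et al.: the watchmaker's unit cost falls with cumulative experience, while the strip mine's average cost falls with output per period.

import math

# Illustrative only: functional forms and parameters are assumptions,
# not figures from Besanko et al.

def learning_curve_unit_cost(first_unit_cost, cumulative_units, progress_ratio=0.8):
    """Unit cost falls with cumulative output (experience): an 80% progress
    ratio means each doubling of cumulative output cuts unit cost to 80%."""
    b = math.log(progress_ratio) / math.log(2)
    return first_unit_cost * cumulative_units ** b

def scale_average_cost(fixed_cost, variable_cost_per_unit, units_per_period):
    """Average cost falls with output per period, because the fixed cost
    (the mining equipment) is spread over more units."""
    return fixed_cost / units_per_period + variable_cost_per_unit

# The watchmaker: experience drives costs down, even at tiny scale.
for n in (1, 10, 100, 1000):
    print(f"{n:>5} watches of experience -> ${learning_curve_unit_cost(5000, n):,.0f} per watch")

# The strip mine: scale drives costs down, even with no special skill.
for tons in (1_000, 10_000, 100_000):
    print(f"{tons:>7} tons this period -> ${scale_average_cost(1_000_000, 5, tons):,.2f} per ton")

Note the difference in what drives each curve: the first depends on cumulative output over the firm's whole history, the second only on the rate of output in the current period.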

          References

Besanko, D., Dranove, D., Shanley, M., & Schaefer, S. (2013). Economics of Strategy (6th ed.). New York: Wiley.

Merger Happy: Lessons of ExxonMobil



            Standard Oil, Rockefeller’s oil conglomerate, was broken up in 1911 into 34 separate companies (“About Us”). Two of those companies were the forerunners of the companies that came to be known as Exxon and Mobil. After almost a century apart, they joined back together on November 30, 1999, after more than 18 months of talks (“About Us”). The merger did not happen in a vacuum. The head of the Bureau of Competition at the Federal Trade Commission at the time noted that there were several other contemporary mergers: “In recent months, we have seen the merger of BP and Amoco - which was the largest industrial merger in history until Exxon/Mobil was announced --and the combination of the refining and marketing businesses of Shell, Texaco and Star Enterprises to create the largest refining and marketing company in the United States” (“Remarks”).
Exxon was the larger of the two companies in a merger worth $75.3 billion, roughly half again as large as the merger between BP and Amoco (“12 Years Later”). Both companies had full control of their revenue streams, from extraction to refining to retail sales, though they often had partnerships. This horizontal merger of equals allowed them to create corporate synergies totaling up to $3.8 billion in pretax savings (“12 Years Later”).
The merger was not without its critics. Public Citizen, an advocacy group, put out a list of concerns about the merger, including “If Exxon-Mobil were a nation it would have the 18th largest economy in the world larger than Denmark, Finland, Austria, and Greece,” and “Exxon-Mobil, with more than 50 refineries in a dozen countries, will be the most powerful oil refiner in the world. This position will allow Exxon-Mobil to shift production to the cheapest, most worker-unfriendly environment” (“10 Facts”).
The company countered that the market had changed. By its measure, the old Standard Oil giant had controlled over 80% of the market for oil, whereas a combined Exxon and Mobil would control only 11% (“Exxon, Mobil Divestitures”). In the end, the regulatory bodies were worried about monopoly conditions on the retail side, especially in the Northeast, where both companies had originally been based after the breakup of Standard Oil. To get approval from the FTC, they had to sell 1,800 gas stations to outside firms (“Deal Nears OK”).
The result for consumers is hard to ferret out. The average gas price the month the merger was announced was $0.873 a gallon. A year later, it had risen to $1.124 nationwide (“U.S. Total Gasoline”). It is hard to tell how much of that increase came from the merger and how much from other contemporary economic effects. Since gas prices eventually came back close to the pre-merger low before taking off on large geopolitical issues, it looks as if there was little overall consumer effect in the long run. As for shareholders, the merger looks like a wash. For the past fifteen years, the total return of XOM stock has moved in tandem with an index of other oil company stocks, though it has beaten the S&P 500 over that time by 2.75% (Morningstar). If there had been true efficiencies to be gained through scale, the value of the combined company would be expected to surpass an index of comparable publicly traded companies. That there was little overall effect on consumers suggests the merger was not necessary, but it did not hurt them either. Perhaps the FTC-mandated divestiture was enough to keep the long-run monopoly concerns from becoming an issue. That there were not greater gains to scale in terms of equity-market returns should show future managers that merging may bring headlines, but not necessarily growth.
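As a quick arithmetic check of the consumer-price point, using only the two EIA monthly averages quoted above, the one-year change works out to roughly a 29% increase:

# Only the two prices quoted above are used; everything else is arithmetic.
price_when_announced = 0.873   # dollars per gallon, month the merger was announced
price_one_year_later = 1.124   # dollars per gallon, one year on

pct_change = (price_one_year_later - price_when_announced) / price_when_announced
print(f"one-year change in the average gas price: {pct_change:.1%}")  # about 28.8%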

References

Baer, W. J. (1999). Statement of the Federal Trade Commission. Federal Trade Commission. Retrieved from http://www.ftc.gov/sites/default/files/documents/public_statements/prepared-statement-federal-trade-commission-exxon/mobil-merger/exxonmobiltestimony.pdf

CNN Money. (1999, November 23). Exxon-Mobil deal nears OK. CNN Money. Retrieved from http://money.cnn.com/1999/11/23/deals/exxon/

Corcoran, G. (2010, November 30). Exxon-Mobil 12 Years Later: Archetype of a Successful Deal. Wall Street Journal. Retrieved from http://blogs.wsj.com/deals/2010/11/30/exxon-mobil-12-years-later-archetype-of-a-successful-deal/

ExxonMobil. (2015). Our History. ExxonMobil. Retrieved from http://corporate.exxonmobil.com/en/company/about-us/history/overview

Morningstar. (2015). Exxon Mobil Corporation (XOM). Morningstar. Retrieved from http://performance.morningstar.com/stock/performance-return.action?t=XOM&region=usa&culture=en-US

Public Citizen. (2015). 10 Facts About the Exxon-Mobil Merger. Public Citizen. Retrieved from http://www.citizen.org/cmep/article_redirect.cfm?ID=6307

U.S. Energy Information Administration. (2015, February 2). Petroleum & Other Liquids. EIA. Retrieved from http://www.eia.gov/dnav/pet/hist/LeafHandler.ashx?n=PET&s=EMA_EPM0_PTC_NUS_DPG&f=M

Wilke, J., & Liesman, S. (1999, January 20). Exxon, Mobil Divestitures Are Seen to Obtain U.S. Approval of Merger. Wall Street Journal. Retrieved from http://www.wsj.com/articles/SB916791504585647500

Sunday, January 18, 2015

Levels of Control in Data Gathering



In scientific data gathering, there are three different ways to collect the data needed to examine the relationships being studied. These three methods tell you different things and have their own strengths and weaknesses. They are the experimental method, the quasi-experimental method, and the correlational method.
In the experimental method, the scientist is trying to isolate causation from a single variable. To see whether a particular variable has a consequence, the scientist must hold all other variables constant and change only the one thing being studied. To make sure the experiment is done correctly, the choice of who is exposed to the independent variable must be made at random (Salkind, 2014, p. 8). For example, say a scientist thinks that college men wearing a baseball cap will be able to run faster. The scientist randomly assigns baseball caps to half of the study group and then measures the running speeds of all the participants. If the study finds that the average speed of the baseball cap wearers was in fact faster than the average speed of the non-cap wearers, then the hypothesis is supported.
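A minimal Python sketch of that design, using simulated numbers (the runners, speeds, and group sizes are all hypothetical stand-ins, not data from Salkind), shows where the random assignment enters:

import random
import statistics

random.seed(42)
participants = [f"runner_{i}" for i in range(40)]

# Random assignment by the researcher is what makes this a true experiment.
random.shuffle(participants)
cap_group, no_cap_group = participants[:20], participants[20:]

def measured_speed(runner):
    """Simulated sprint speed in meters per second (caps have no built-in
    effect here, so any gap between the groups is just noise)."""
    return random.gauss(6.0, 0.5)

cap_speeds = [measured_speed(r) for r in cap_group]
no_cap_speeds = [measured_speed(r) for r in no_cap_group]

print("mean speed, cap group:   ", round(statistics.mean(cap_speeds), 2))
print("mean speed, no-cap group:", round(statistics.mean(no_cap_speeds), 2))
# A careful analysis would also ask whether any difference is larger
# than chance, not just whether one mean is bigger than the other.
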
Not all variables are as easily tested as the relationship between baseball caps and the running speed of college-age men. Sometimes what is to be measured is not so easy to control. If the experiment designer cannot pick who receives a variable, then a measure of control is lost. This is part of the quasi-experimental method (Salkind, 2014, p. 9). An example where a quasi-experimental method would be used is a study of who is better at chess, all other things being equal: left-handed people or right-handed people. Since nature has already chosen who will be left-handed and who will be right-handed, the element of randomness has been taken away from the experimenter. The results of a quasi-experiment may be less certain than those of a strictly experimental method, because the left-handers may possess other traits besides their dominant hand that skew the results.
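The same kind of comparison in quasi-experimental form (again with entirely made-up ratings and handedness, purely to illustrate the design) differs only in how the groups come about: they are observed, not assigned.

import random
import statistics

random.seed(7)

# Group membership comes from a pre-existing trait the researcher merely
# observes; nothing here is randomly assigned by the experimenter.
players = [{"hand": random.choice(["left", "right"]),
            "rating": random.gauss(1500, 200)} for _ in range(200)]

left_ratings = [p["rating"] for p in players if p["hand"] == "left"]
right_ratings = [p["rating"] for p in players if p["hand"] == "right"]

print("mean rating, left-handed: ", round(statistics.mean(left_ratings)))
print("mean rating, right-handed:", round(statistics.mean(right_ratings)))
# Any gap could still be driven by confounds that travel with handedness,
# which is exactly the loss of certainty described above.
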
The final method for looking at relationships between two variables is the correlational method. In this method, no experiment is run; instead, the scientist looks at two sets of data to see if there is a relationship between them. Does one go up while the other goes down? Alternatively, do they move in tandem? If either is true, the indicators are said to be correlated. The problem with looking for correlation is that the scientist cannot tell whether there is a direct causal relationship (Salkind, 2014, p. 10). Say a scientist looks at the sales of Happy Meals in America as well as the average weight of American children. If both variables increased over the same period, then a correlation can be said to exist. The issue is that there is no way to say directly what caused what. Did children gain weight because they were eating too many Happy Meals, or did already-obese children demand more Happy Meals?
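A correlational sketch of the Happy Meal example (the yearly figures below are invented purely to illustrate the calculation) just measures how tightly two observed series move together; note that statistics.correlation requires Python 3.10 or newer.

import statistics

# Hypothetical yearly series, invented for illustration only.
happy_meal_sales = [1.00, 1.05, 1.12, 1.18, 1.25, 1.31]   # billions of meals
avg_child_weight = [70.0, 70.6, 71.1, 71.9, 72.4, 73.0]   # pounds

# Pearson's r: +1 means the series rise and fall together, -1 means one
# rises while the other falls, 0 means no linear relationship.
r = statistics.correlation(happy_meal_sales, avg_child_weight)
print("correlation:", round(r, 3))

# A strong correlation says nothing about which series, if either,
# is causing the other -- the limitation described above.
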
The overall result is that the more control a scientist has over the independent variables being studied, the more certain they can be of the validity of their results. More control is the desired starting point, but it may not always be possible to attain. That is why the other methods exist.

Saturday, January 17, 2015

Bertrand Russell Deserves a Seat at Your Table: On Sceptical Essays

I grabbed this book because it was in the Journal’s recommended books for year-end last year. I had read his “Why I Am Not a Christian,” and was aware of Russell as a philosopher and mathematician. I did not know he was such a clear writer. I have to respect a freethinking, socialist atheist from 100 years ago who was not afraid to follow the strength of his convictions even when they led him against the grain. He lost potential jobs and went to jail for his beliefs. Maybe he was never in any real danger, but I don’t know – still brave.

Reading this book made me think of that hypothetical situation where you can have dinner with anyone you want, living or dead. I think I’d have Russell at my table. His writing, read now, sounds contemporary. These essays, for the most part, would not be out of place in current conversation. I say for the most part because there are a couple that strike wrong notes. One essentializes all “Chinese,” and another talks up the benefits of behaviorism and is perhaps too enthusiastic about the problems that science could solve. Other than that, I liked all the essays. In fact, I liked them so much that it is hard to point out what was good. I normally read with a pen so I can take notes and engage with the text, but I couldn’t with this book. It just had a narrative and argumentative momentum that I couldn’t dent. I instead dog-eared the pages where there was a striking turn of phrase or an interesting way of looking at a subject that I hadn’t previously considered. By the end of the book, my wife remarked on just how many dog-ears were in it. I can’t summarize it here and do it justice. You need to read Russell to appreciate him. I’m just a shadow on the cave wall.

Thursday, January 15, 2015

Median Versus Mean



In my experience, there is a big difference between when the median and when the mean is the more useful measure of central tendency. The question is how likely vast outliers are.
A good example is height. The human body can only grow so tall. That means that if you want to find out how tall an “average” person is, the arithmetic mean is useful. Take a large enough sample, and you can find with a high degree of certainty that the average height of the population is close to the arithmetic mean of your sample. Most likely it will also be close to your median height, as the normal distribution was built on just such measurements.
Conversely, some measurements are not bounded. An example here is wealth. There is no biological, physical, or chemical reason that a person cannot have all the money in the world. There are people who have so much money that most of the world could not imagine it, and there are people who make even them feel poor. This long tail can lead to some misleading measurements. If you took the wealth of Bill Gates and three random people off the street, the mean wealth of your sample would be around 20 billion dollars. That may sound absurd, but some people have so much money that they exert a heavy pull on the mean of the whole population. That is why economists like to talk about median household income. Currently, if I recall correctly, it is around fifty thousand dollars. The mean household income is much higher than that. We use the median here because it gives us a truer picture of the thing being measured.
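A minimal sketch of that arithmetic (the three street wealths are invented, and the 80-billion-dollar figure is a rough stand-in for Bill Gates’ net worth at the time) shows how a single outlier drags the mean while barely moving the median:

import statistics

street_sample = [40_000, 75_000, 120_000]          # three random passers-by (hypothetical)
with_outlier = street_sample + [80_000_000_000]    # add one multi-billionaire

print("mean, street sample:   ", round(statistics.mean(street_sample)))   # about 78,333
print("median, street sample: ", statistics.median(street_sample))        # 75,000
print("mean, with outlier:    ", round(statistics.mean(with_outlier)))    # about 20 billion
print("median, with outlier:  ", statistics.median(with_outlier))         # 97,500

The mean explodes to around 20 billion while the median stays near an ordinary household’s wealth, which is exactly why the median is the preferred headline figure for income and wealth.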

Sunday, December 21, 2014

Why Central Banking was Late to the Party in the United States



          It took over a hundred years for central banking to establish itself in the United States because the antifederalist strain ran through the country for a long time. It still does, and our constitution still reflects it. It was even worse at the start, what with black people who were owned as property counting as only three-fifths of a person. That arrangement lasted almost 100 years and had to wait until the major colonial powers got rid of the institution in which people were held as capital. The United States was in expansion mode even after Reconstruction, and did not fully emerge as a global power until the time of the Spanish-American War. It is probably not a coincidence that the emergence of America’s power is correlated with getting on the central bank wagon. The Banks of England and France were long in existence, while our nation would swing towards federalism in monetary policy and then back towards antifederalism. The Second Bank of the United States was not rechartered under Andrew Jackson. If there had been a Tea Party in the nineteenth century, Jackson would have been the person wearing tea bags stapled to a tricorner hat.
            The other thing holding back central banking when the United States was young was the business cycle. Sure, things would get bad and the socialists would agitate and the police would frame some anarchists for bombing a crowd, but then things would swing back up and whatever impetus for change there was would be forgotten. The same dynamic exists now. The Dodd-Frank legislation was a watered-down bill in the first place, but it was all that could pass our Congress at the time to make sure the events of 2008 never happened again. Of course, it is less than a decade since the top of the housing market, and politicians are already trying to undo what protections were put in place. Add to that the fact that Fannie Mae and Freddie Mac are now saying they will buy mortgages with down payments as small as three percent. The problem is that, as a culture, our problems are too soon forgotten.
            Change happened in 1913 because the crashes were deep enough and close enough together that even the moneyed interests were worried about the future. Morgan backed the banks in the Panic of 1907, and even then that led to a widespread recession and some bank failures. What would happen if the next crash were even worse and there was no John Pierpont Morgan to step in and grant liquidity to the banks? That did happen in 1929. The Federal Reserve may have failed then, but it did not have the data or the knowledge that the Federal Reserve of 2008 had. Thankfully, the Fed as established uses its power for good and keeps the economy stable. It is just a shame that so much human suffering had to happen before it felt comfortable using its tools. Pray that those tools are never taken from it, because the main role of the Fed is to keep the economy stable, and it has proven that it can do that within limits.