Now, apart from that dangerous oversimplification, Effective Altruism has applied QALYs as evidence for its career and policy recommendations, and it has expanded this mindset into a slew of other measures that together form its toolkit for evidence-based decision making. As we mentioned in last week's article, the oversimplification was thoughtless: it ignored the cultural aspects of the decision at hand and simply waited for the statistics to work out in the long run.

However, that is just one case and one measure used to guide budding doctors in their decision making. Other measures commonly used by EA advisory organizations like 80,000 Hours and GiveWell are Disability-Adjusted Life Years (DALYs) and cost-effectiveness analysis. DALYs are essentially QALYs inverted: instead of counting healthy years gained, they count the years lost to ill health, disability, or early death. They therefore carry the same misgivings we raised about QALYs last week, since they too ignore the cultural aspects of the decision being made, and neither measure carries a weight for happiness.
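For readers who want arithmetic instead of acronyms, here is a minimal sketch of how the two measures are typically computed, assuming the textbook definitions; every number below is an invented placeholder, not an estimate from any real study.

```python
# A minimal sketch of the standard QALY/DALY arithmetic. All weights and
# durations here are invented placeholders, not figures from any real study.

def qalys(years_by_quality):
    """QALYs gained: sum of (years lived * quality weight in [0, 1])."""
    return sum(years * quality for years, quality in years_by_quality)

def dalys(years_of_life_lost, years_with_disability, disability_weight):
    """DALYs lost: years of life lost (YLL) plus years lived with
    disability (YLD), the latter discounted by a disability weight."""
    return years_of_life_lost + years_with_disability * disability_weight

# Hypothetical intervention: 10 years at 0.9 quality vs. 10 years at 0.6.
qaly_gain = qalys([(10, 0.9)]) - qalys([(10, 0.6)])  # 3.0 QALYs gained

# Hypothetical burden: 5 years lost to early death plus 8 years lived
# with a condition carrying a 0.3 disability weight.
daly_loss = dalys(5, 8, 0.3)                         # 7.4 DALYs lost

print(qaly_gain, daly_loss)
```

Note what never appears in either function: any term for the cultural context of the decision, and any weight for happiness. That is the gap we flagged last week, made visible in code.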

EA also embraces cost-effectiveness analysis to maximize positive impact, but navigating this approach comes with its own challenges. Unlike QALYs and DALYs, cost-effectiveness estimates of altruism can mislead in a different way. Take one study about caged chickens that laments the exclusion of externalities: the external costs generated by a charity's interventions, adjustments of past and future costs for inflation, past fixed or sunk costs, overhead costs, counterfactual costs, and the costs of project initiation (e.g., the terms-of-reference stage). The point is that this is a lot of costs to consider for a study, found on the online EA forum, tackling the cost-effectiveness of corporate campaigns' cage-free and broiler welfare commitments.
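To make the accounting concern concrete, here is a minimal sketch, with entirely invented figures, of how much a cost-per-outcome ratio can swing depending on which of those cost categories an analyst decides to include:

```python
# A minimal sketch of a cost-effectiveness ratio and its sensitivity to
# which cost categories get counted. All figures are invented for
# illustration; none come from the study discussed above.

cost_categories = {
    "direct_program": 1_000_000,  # the costs charities usually report
    "overhead": 150_000,          # admin, fundraising, staff
    "sunk_and_startup": 80_000,   # past fixed costs, terms-of-reference stage
    "externalities": 200_000,     # external costs the intervention imposes
    "counterfactual": 120_000,    # spending that would have happened anyway
}
animals_helped = 5_000_000

# Naive estimate: only the direct program costs.
naive = cost_categories["direct_program"] / animals_helped

# Full estimate: every category included.
full = sum(cost_categories.values()) / animals_helped

print(f"naive: ${naive:.3f} per animal, full: ${full:.3f} per animal")
# naive: $0.200 per animal, full: $0.310 per animal
```

Same intervention, same outcome, and the headline number moves by more than half purely on accounting choices.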

For the uninitiated, that's statistical overfitting for a chicken's welfare, and, spoiler alert, this study found that over 240 million cage-free hens will benefit from a corporate campaign. Whew. That in itself presents another issue with most EA studies and programs: the objectives are either narrowly defined or far too broad, making it very hard to take most EA studies seriously, especially when they lack robust, mathematically focused peer review.

That was just a study of chicken welfare; imagine the stats and maths they'll put together for something as important as AI, bolstering EA's often misleading ideals with even more evidence for our decision making.

Oh wait, no need. This just happened last week. 

One of the leading theories about OpenAI's mini-implosion, apart from the previous board's accusation that Sam Altman lacked candor with respect to OpenAI's business arm and a mysterious letter about a Project Q* supposedly written by OpenAI employees, is a disagreement between Sam Altman and Helen Toner.

As an introduction, Helen Toner is a second-generation OpenAI board member who has excelled in the field of EA and its applications across various disciplines. We point out that she is second-generation because the board member she replaced is Holden Karnofsky, a prominent name in both EA and AI.

Despite her prominence, the EA community didn't discuss Toner much, and for good reason. The idea that bickering between the two resulted in a firing is absurd. However, looking into the second-order details makes a less altruistic motive more probable: a director suddenly initiating a poison-pill defense strategy suggests that the director is protecting something.

This is worth considering since Holden Karnofsky had to leave OpenAI's board due to conflicts of interest arising from his wife becoming the President of Anthropic, OpenAI's largest competitor in many ways, and Helen Toner, whose most notable accomplishments come from her work at Open Philanthropy, replaced him.

Open Philanthropy is a well-known research and grantmaking foundation co-founded by Holden Karnofsky. Nothing much to say there, except that related-party interactions like that would draw scrutiny in the corporate governance of other industries and sectors. For completeness' sake, the other two board members who voted to oust Sam Altman were Ilya Sutskever and Tasha McCauley. Sutskever, OpenAI's Chief Scientist, switched sides so many times throughout the saga that he apparently knew less about where OpenAI was headed as an organization than he thought. McCauley has worked with the RAND Corporation (best known for its stellar Cold War and nuclear-conflict contributions) as an adjunct senior management scientist, is the former CEO of GeoSim Systems, and is notably Joseph Gordon-Levitt's wife.

Luckily enough, the dust may have already settled on the reshaping of OpenAI's future, and the company has a new board moving forward. Helen Toner, Ilya Sutskever, and Tasha McCauley were replaced by Bret Taylor and Larry Summers, big names in both the public and private sectors.

This doesn’t necessarily mean that OpenAI’s management issues have dissipated, they will continue to come across issues not just of profit and ethics just like last week. Hopefully this time around it won’t be marred by philosophies and other probable ulterior motives and should just be driven by pure profit. A profit maximizing function defined by a board member collectively made of the right group of people with the right experiences can do the most good in this situation, or at least more than the alternatives offered so far.