Back in 2006, Michigan voters put an end to race-based affirmative action through a ballot initiative. After eight years of back and forth, the U.S. Supreme Court has finally weighed in, upholding the power of Michigan, and the other states that qualify, limit, or outright forbid race-based affirmative action (Arizona, California, Connecticut, Nebraska, New Hampshire, Oklahoma, and Washington), to make this decision on a state-by-state basis.

Only Justices Sotomayor and Ginsburg dissented. Justice Kagan recused herself due to her involvement, at her previous job, with a case similar to this one.

Now, depending on your perspective, affirmative action is either (a) a necessary means of righting the unpleasant-but-real opportunity gaps between people of different races or (b) a well-intentioned program that causes more harm, divisiveness, and resentment than it fixes. The opinions of the Justices were split along similar lines.

Whatever your opinion, it always helps to have some broader historical perspective to understand the “why” and “how” of the program and not just the “what.”

The idea originally appeared in the Wagner Act of 1935, a Congressional law that established numerous basic labor rights, including the right to unionize and protection against discrimination for union activity. This didn’t have anything to do with race, just workers’ rights. Throughout the ’30s and early ’40s, the Roosevelt administration put forth a number of other measures that explicitly forbade discriminatory hiring practices for public works projects and any other tax-funded employment. The next few presidents followed suit, with Truman, Kennedy, and Johnson all pushing hard for government programs to undo workplace injustices against various minority groups. (I left out Eisenhower, since he generally believed this was more of a state-by-state issue.)

Kennedy’s administration first coined the term “affirmative action,” and it is here that the program first adopted the philosophy of “actively” undoing previous wrongs, rather than simply removing previous barriers. This remained more of an idea than a practice until the signing of the Civil Rights Act of 1964. Under Title VII of the act, employers are explicitly forbidden from discriminating based on race, color, religion, sex, or national origin. Even at the time, there was some trepidation that this would be translated into hiring “quotas,” but politicians of the era insisted that was not the case.

Nixon and Ford continued to hash out more of the specifics, but by and large, affirmative action didn’t make headlines again until the 2000s, when critics of the program started becoming more vocal. (Though you could argue that 1990’s Americans with Disabilities Act was a continuation of the original idea.)

This is where my chronology gets a little fuzzy. At some point the national conversation stopped being primarily about the workplace and started being primarily about college acceptance. Why?

My personal theory is that college admissions, which more or less boil down to a massive-scale blind job application, make for a better case study of the complications that arise from affirmative action. Most jobs have both a more personal acceptance process and a more specific set of requirements. It’s easier for colleges to creep into the dreaded “quota” territory, intentionally or not, when a student is judged on little more than a report card and an SAT score. Additionally, colleges have a much greater incentive to promote a certain level of diversity on campus. For most of us, interacting with people from diverse cultural and socioeconomic backgrounds is part of the college experience. While high schools tend to be built around small and similar communities, colleges expand opportunities by building new ones.

In spite of the Supreme Court’s recent decision, I don’t see this topic going away any time soon. It’ll be many more decades of debate before there’s anything approaching a national consensus. So what do you present-day college students think? What do you see as the future of affirmative action?