

In a paper released last September by the Brookings Institution, author Alex Engler suggests that the use of algorithms to determine the amount of scholarship (i.e., discount) support students should receive does, in fact, hurt students. He claims that the “prevailing evidence” suggests that scholarship awards are lower when algorithms are used, because they “excel at identifying a student’s exact willingness to pay” to attend a particular institution. Engler goes on to state, in what we believe to be a fairy-land view, that “colleges should not use predicted likelihood to enroll in either the admissions process or in the awarding of need-based aid,” and that decisions should be based only on the candidate’s merit (as if this were an absolute thing and not relative to each institution’s needs).

Inside Higher Ed reported on this in late September 2021, doing a fine job presenting both sides of the algorithm debate. We wish to take this discussion one step further.

In a perfect world, where students apply to three selective colleges and are guaranteed admission to at least one, and where the competition for students at every level of selectivity is moderate rather than intense, these noble ideals would be hard to argue against. Moreover, if one has never had the responsibility of recruiting, enrolling and retaining the number of students that meets the institution’s target tuition revenue and delivers the desired quality, diversity and talent, it is easy to convince oneself that Engler’s view of the current system might be reasonable.

But let’s look at what really happens as admission and financial aid officers try to assemble a class that meets multiple institutional goals, including the revenue needed to continue to provide a sustainable and quality education. It’s just not as simple as “Algorithms—bad; interest- and need-blind admission—good.”

Engler does acknowledge that “Algorithms can play a responsible role in higher education enrollment management and are not inherently harmful.” He distinguishes between the use of algorithms to predict enrollment (which is fine because institutions need to plan for course enrollments and residence hall beds) and the “optimization” use, which he categorizes as troublesome. Here, the paper claims that optimization algorithms “may reduce average per-student scholarship support,” that they “optimize scholarships for yielding students rather than … support[ing] student graduation and success” and that “subgroups of applicants who appear … to be less affected by changes in scholarship funding may be mistreated.”

For institutions with strong reputations and significant demand, optimization is rarely necessary. They have the financial resources to select students without regard to their financial need and to meet full need once students are accepted. They also seldom award non-need, so-called merit scholarships, which are the primary target of optimization algorithms, despite the paper’s assertion that such algorithms are used to determine need-based awards. For these highly visible and well-known universities, the optimization algorithms Engler describes are rarely used. Of course, when the popular press and the prestige-conscious public talk about “college admissions” writ large, it is almost always these elite institutions that are highlighted (for names, just go to your favorite top 20 list).

Then we have institutions that are admitting more than two-thirds of their applicants—the public regionals and small, nonprofit private colleges and universities that make up the majority of the 2,300 four-year schools (excluding the for-profits) in the U.S. These schools, with baseline awards (discounts) for all, typically use the optimization tools to increase “merit” scholarships from that baseline to attract the students they most want. They may use algorithms to help determine additional financial incentives for individuals or, more likely, for groups of students to secure enrollments. Such institutions literally cannot afford to lose students, so they are not going to try to minimize aid, as evidenced by private college discount rates soaring into the 70 percent range. It is hard to argue that algorithms hurt students applying to schools in this very large group.

Finally, we have a group far greater in number than the elites, but not nearly as plentiful as the less selective institutions—those colleges admitting roughly between 25 and 50 percent of their applicants. At these colleges, need-based financial aid policies can vary widely. A good number meet full need, in which case optimizing algorithms are of little value. If an institution gaps as a baseline (meaning that it does not meet full need and that the grant portion of an aid package has a minimum value), optimizing algorithms provide more need-based scholarship aid above the standard gap, not less. The majority of these institutions also award non-need scholarships and may employ optimizing algorithms to determine the “merit” discounts required to shift the enrollment decision in their favor. Here, we are referring mainly to students whose parents can pay, but for whom a discount is likely necessary to entice the student to enroll. In these cases, while net tuition revenue per student may decrease because of non-need awards, total net revenue increases because optimization algorithms help fuel enrollment increases. It is this total net revenue that helps the institution provide need-based scholarships to more students. This is not theoretical. This happens.

Our experience suggests that most institutions care about retention—after all, it is the most cost-effective way to maintain or enhance tuition revenue. Moreover, many ranking systems use retention and graduation rates, particularly among low-income students, in determining an institution’s position. For all institutions, it makes little sense to admit a student who will ultimately drop out, either because they are not academically prepared or because they do not have the financial resources to complete. Whether or not a college uses optimization algorithms, this is simply bad practice.

Different institutions have different goals. Long before algorithms, when we were young admissions officers at two future Patriot League institutions, admissions and financial aid decisions were made based on what those colleges valued. It was also a simpler time in the 1970s and 1980s. The Common Application, which now has over 900 members, had fewer than 100, and students applied to many fewer colleges. There was not the intense competition there is today, with students applying to 10, 15 or 20 colleges rather than three or four. That increase has resulted in a so-called merit war that has not only escalated discounts but also inflated prices and confounded traditional yield models.

As technology has evolved to provide more tools for parents and students to compare colleges, and as books, consultants and free webinars instruct parents in the art of scholarship negotiation, colleges also need tools to help them work smarter to achieve both their enrollment and their net revenue goals. Higher education may be in crisis mode when we look at escalating price tags, declining demographics, culture wars on campus, COVID-19 protocols and a host of other issues. However, the use of algorithms in admissions and financial aid does not fuel these crises. Such algorithms were created and are used by enrollment leaders to better manage the intersection of institutional priorities and the external pressures that characterize modern-day college admissions, and to help make it financially possible for students to attend their institutions.
