Review of “Chronicle’s” AI Guide

The Chronicle of Higher Education (CHE) recently published a brief, useful, and competent artificial intelligence (AI) guide for university administrators, “Adapting to AI: How to understand, prepare for, and innovate in a changing landscape.” The author is Taylor Swaak, a reporter for CHE. Swaak has done her homework for this Guide, living up to her focus on how institutions are responding to technological innovation.

As Swaak has noted, using AI responsibly is cloaked in many caveats. I want to highlight several of these caveats. They are part of the necessary conversation for universities regarding this new technology.

 

AI isn’t new.

The Guide captures the sentiment of administrators. Its concerns are not the potential for classroom cheating, nor the effect on curriculum and learning. Administrators are now aware of AI's compelling presence far beyond the computer science buildings. Important questions about resources, staff skills, policy compliance, and data privacy cut across the academy.

The public release of generative AI tools, such as OpenAI’s ChatGPT and Google’s Bard/Gemini, has forced the university environment to play catch-up.

But how should administrators, or anyone for that matter, talk about AI and how large language models function? At one level, administrators are advised to be willing to ask “stupid questions.”

However, a more interesting point that Swaak notes is the impulse to anthropomorphize AI. Assigning human actions like “hallucinating,” some experts note, doesn’t help; the tools are not “thinking” when we use them. They are predicting the most likely answer—predicting what output the user wants—by analyzing their underlying model’s training data, finding patterns, and calculating the probability that those patterns are correct.

They are, essentially, “mathing,” says Lance Eaton, the director of faculty development and innovation at College Unbound. (It should be noted that much of this “mathing” is a black box. Even developers aren’t entirely sure how or why the models they’ve trained generate the outputs they do, though research is emerging—see page nine).
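To make the point concrete, the sketch below asks a small, openly available language model to score candidate next words for a short prompt. It is a minimal illustration, assuming the GPT-2 model and the Hugging Face transformers library (neither is named in the Guide, and ChatGPT's underlying models are vastly larger), but the mechanism of scoring every possible next token and picking from the most probable ones is the same in kind.

```python
# Minimal sketch of "mathing": assumes the openly available GPT-2 model and the
# Hugging Face "transformers" library. Neither is mentioned in the Guide; this
# simply illustrates next-token probability in miniature.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The university library is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # a score for every possible next token
probs = torch.softmax(logits, dim=-1)        # scores become probabilities

top = torch.topk(probs, k=5)                 # the five most probable continuations
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r}  p={p.item():.3f}")
```

No understanding of libraries or universities is involved; the model is calculating which continuation is statistically most likely given its training data.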

On this advice, I dissent. While administrators should know that AI tools do not “think” but “math,” the advice is better ignored when communicating with others. Human language is rife with metaphor and other semantic workarounds that do not capture the precision of mathematics and algorithms. How would an administrator “math” a thought about the content or the functionality of a large language model (LLM)? Wouldn’t it be better to point out what is humanly obvious than to insist on a technically correct analysis? The latter seems pretentious and likely to produce feelings of administrative guilt.

Ultimately, we are talking about ourselves. ChatGPT is not biased; we are. The technologists who built ChatGPT, with its layers, loops, features, and algorithms, have built in their biases, explicit and implicit. That’s to be expected. The issue then becomes what we (the designers, the companies, the countries, and the universities) want these LLMs and their cousins in image-making, medical diagnostics, military applications, and the like to accomplish. How should these models function? And will we be transparent about the end goals this technology is designed to serve?

This is not an irrelevant question since even the Guide is drawn into this bias issue.

 

AI in admissions.

One of the Guide’s illustrative features (Spotlight, Commentary, Takeaway, Road Test) takes up diversity. Andy Hannah discusses how university admissions might incorporate AI, noting that, if carefully scrutinized, AI can become an integral member of the admissions team.

As colleges navigate the shifting landscape of higher education, the appeal of employing artificial intelligence in admissions has grown significantly. Institutions are grappling with multiple pressing challenges, including how to maintain diverse enrollments in a post-affirmative-action world; ensure equitable access; manage escalating administrative burdens caused by more applicants; and prepare for the unavoidable demographic cliff.

Anyone attuned to the tangled history of affirmative action in university admissions, the creation of “diversity, equity, and inclusion” (DEI) administrative units, and the quest for equity will wonder what Hannah has in mind for guiding how a figurative AdmissionsGPT would decide who gets admitted to future classes.

Hannah recognizes the strengths and weaknesses of deploying AI in college admissions, but his design for an imagined AdmissionsGPT seems to land on protecting the status quo ante.

That journey demands rigorous scrutiny of the data that feeds AI, a commitment to reimagining admissions processes, and a concerted effort to dismantle the systemic barriers that inhibit equity. In doing so, colleges can build a more inclusive and equitable future in higher education, where technology serves as a bridge to opportunity.

I may be misreading Hannah, in which case I apologize. However one reads AI’s tea leaves and algorithms, it would be useful to have competent legal oversight in designing an AdmissionsGPT.

 

No one, after all, is going to wrangle AI alone.

Swaak ends on a thoughtful note—as thoughtful as one can be while riding a rocket ship into a dimly understood academy of the future:

Administrators at colleges are witnessing the rapid development of powerful yet imperfect AI technologies that are shaking up the knowledge economy and forcing higher education — across all of its functions — to rethink what skills it values. It’s no wonder many feel out of their depth and unable to respond effectively and strategically. Responding to AI is, indeed, a multilayered undertaking. There’s the personal aspect of it: What do I, as an administrator, need to learn and understand? There’s the community aspect: What might it look like to put in place guidelines and policies, and roll out AI-literacy programs — all while preserving faculty autonomy? There’s the external aspect: How might we work on AI innovation with others, be they other institutions or industry partners? But experts implore administrators not to shy away from the work. There is too much on the line, they say — tools that can help level the academic playing field, tackle administrative drudgery, and better position students for workplace success — to stand by.

Students? The Guide’s point of departure, as noted, is administrators. Presumably their tasks foster a better learning environment for students. However, it is worth leaving the administrator view and taking a closer look at the faculty-student view. This shift in perspective suggests that integrating AI into the academy may be far more challenging than even Swaak describes in the Guide.

Professor Megan Fritts assigned an introductory essay in her philosophy class. Many students saw it as busywork and turned to ChatGPT to write the essay for them. Perhaps they viewed the task as trivial, much like the familiar elementary school essay “What I Did on My Summer Vacation.” Or perhaps, with today’s advanced technology, this once-standard assignment has evolved in a different direction than it did decades ago. Regardless, this is what Fritts had to say:

While a common defense permeating Fritts’ replies likened ChatGPT for writing to a calculator for math problems, she said that viewing LLMs as just another problem-solving tool is a ‘mistaken’ comparison, especially in the context of humanities.

Calculators reduce the time needed to solve mechanical operations that students are already taught to produce a singular correct solution. But Fritts said that the aim of humanities education is not to create a product but to ‘shape people’ by ‘giving them the ability to think about things that they wouldn’t naturally be prompted to think about.’

‘The goal is to create liberated minds — liberated people — and offloading the thinking onto a machine, by definition, doesn’t achieve that,’ she said.

How AI is positioned in the university can thus look radically different depending on where one stands. That is not surprising, given that administrators, faculty, and students engage with the university in very different contexts and with very different objectives.

I decided to ask ChatGPT for its understanding: How should AI be designed for a university? The reply was detailed and comprehensive. I recommend that you try this prompt. As I discovered, it will likely overlook what Fritts concluded: “The goal is to create liberated minds — liberated people — and offloading the thinking onto a machine, by definition, doesn’t achieve that.”

Maybe a follow-up prompt is required.
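For readers who would rather run the experiment, and a follow-up, outside the chat interface, here is a minimal sketch assuming the OpenAI Python SDK and an API key; the model name is a placeholder, and my own exchange used the ChatGPT web interface.

```python
# Minimal sketch, assuming the OpenAI Python SDK (pip install openai) and an API
# key in the OPENAI_API_KEY environment variable. The model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "user", "content": "How should AI be designed for a university?"}]

first = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(first.choices[0].message.content)

# A follow-up prompt, pressing the point the first reply likely overlooks.
history.append({"role": "assistant", "content": first.choices[0].message.content})
history.append({"role": "user", "content": "How would that design preserve the goal of "
                "creating liberated minds rather than offloading thinking onto a machine?"})

second = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(second.choices[0].message.content)
```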


Image by Supatman — Adobe Stock — Asset ID#: 574643266


One thought on “Review of “Chronicle’s” AI Guide”

  1. My larger point about “mathing” instead of thinking is that because it comes to its answers differently, that’s important to understand in what it creates in terms of responses. Humanizing it–whether we call it “thinking” or “hallucinating”–gives over human agency and changes our understanding and interpretation of the output. If we see the output as an unaware calculation, we are likely to be more skeptical and critical of its outputs, which is what we want. Whatever we want from the AI tools, we actually also need to understand what its deliverables actually are to know if our wants are met.
