Joseph Aoun advocates for revising higher education to adapt to artificial intelligence (AI) challenges. I have also advocated for this revision. Aoun presents a rationale and a buffet of possibilities. Here, I will extract a core recommendation to explore how combining disciplines with AI might work.
Aoun writes:
Since AI has extended its tendrils into nearly all facets of life, an education in it must be similarly comprehensive, providing a lingua franca that learners can apply across all of AI’s manifestations. This should be panoramic, offering not just an understanding of how AI is affecting our economy, but also our institutions, and our future as a species. At the same time, it also should be personal so students learn to recognize AI’s fingerprints in their daily lives: in their personal finances and health care, in their homes and transportation, in their social-media feeds, and in the apps that recommend what to buy, whom to date, and what to believe. This core education can be coupled with innovations at the interdisciplinary level. For example, colleges should incorporate instruction into how AI is transforming each subject of study so students’ learning is kept up-to-date in this fast-moving milieu. One way to do it is through combined majors that weave together disciplines with the thread of AI — for example, bridging computer science and theater. This develops a depth of knowledge while simultaneously exploring how technology may be changing subjects of study, challenging accepted shibboleths, or creating new opportunities.
Taking off from Aoun's recommendation, I view the visual arts as a useful point of departure. More specifically, the artistic process can integrate contemporary issues found in many applied courses. Leaning into the visual arts acknowledges how images can benefit student learning; images can also embody that learning when students are tasked with creating one that symbolizes the core elements of an interdisciplinary course.
The new element in interdisciplinary courses would be the skill sets required by AI image generators and language models. As puzzling as it may seem, it is less important to know what AI is than to be able to use these skill sets. A necessary caveat is that AI models produce textual and visual hallucinations, which may compound into further errors. There are also limits to what AI can add to creativity. Still, AI is a golem speeding toward a universal presence. Even if the fear of hallucinations and AI's limitations are overstated, we know we are living through an ill-defined social, cultural, and technological transformation.
Students will be using AI. So, shouldn’t their instructors be a step or two ahead of them?
Viewed through the typical process for approving a new curriculum, my suggestion may seem built on fashionable offerings, perhaps even reckless. But how different is it from the quick, fearful response of universities suspending in-person classes and going online during the COVID-19 pandemic? The major differences are that the pandemic has subsided and that the data we now have about COVID-19 makes it less threatening, while AI technology marches on and is likely to be permanent.
Here is a more relevant consideration based on an instructor-student interaction I learned of.
An instructor asked the class to submit a review of a movie they had watched in class. Two reviews seemed too well written, especially for one student whose earlier work was C-level. The instructor thought AI might have generated the essays. The instructor asked each student to discuss their work at his office. The first student declared that she was working diligently on the essay and that no AI was used to write it. The instructor accepted the student’s explanation. The same occurred with the second student.
However, during the next class, the second student stood up and alleged that the professor was racist: the student was a minority, and the instructor, the student claimed, had assumed a minority student couldn't write a good essay. The instructor explained his concern about AI to the class and expected his explanation would suffice. Nevertheless, the dean put a note in the instructor's file about the student's allegation of racism without ever consulting the instructor.
Even without knowing how common such examples are, few can doubt that the nature and extent of AI present such pedagogical problems. And not just concerning race. The use of AI can easily cloud the university enterprise.
The alternative is simple.
Instead of evading the presence of AI, incorporate it into the educational enterprise, as Aoun advocates, and incorporate it robustly and transparently.
Exploring a Hypothetical Curriculum at University X
Imagine I’m the president of a university. I require all students to take a pre-semester enrichment class or undertake a project that spans at least three disciplines. This experiential class or project involves engaging with a chatbot and using an AI image generator to visualize an intellectual argument. The goal is for students to use AI not passively but actively, fostering surprise and insight in intellectual discovery and ensuring that the final writing and image are their own creations.
Consider a model from first-year law classes: a student who reads the final exam and writes an answer might get a C or D. Instead, they are expected to spot issues, present majority and minority opinions, and address each issue methodically, demonstrating the process of legal reasoning based on a specific fact pattern. Similarly, students working with AI must engage in a give-and-take with the chatbot. Relying on the first response might earn a C or D, reflecting intellectual laziness or uncritical acceptance of ideology. A better grade requires iterative engagement with both the chatbot and image generators, mirroring the legal reasoning process.
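The give-and-take that the law-school model suggests can be sketched in code. Below is a minimal, purely illustrative Python sketch: the `chatbot` function is a stub standing in for a real language-model API, and the grading rubric is my own toy assumption, not part of any actual course.

```python
# Illustrative sketch: iterative engagement with a chatbot, as opposed to
# accepting the first answer. The chatbot here is a stub with canned replies;
# in practice it would be a call to a real language-model API.

def chatbot(prompt: str) -> str:
    """Stand-in for a language model: returns a canned reply per prompt."""
    canned = {
        "Summarize the defund-the-police debate.":
            "Some communities want fewer police; others want more.",
        "What evidence supports each side?":
            "Both sides cite crime statistics, incident reports, and budget studies.",
        "Where do those sources disagree, and why?":
            "They weigh the same incidents differently and use different time scales.",
    }
    return canned.get(prompt, "I need a more specific question.")

def iterate_with_chatbot(prompts: list[str]) -> list[tuple[str, str]]:
    """Record each round of the give-and-take as (prompt, reply) pairs."""
    return [(p, chatbot(p)) for p in prompts]

def grade(transcript: list[tuple[str, str]]) -> str:
    """Toy rubric: stopping at the first response (uncritical acceptance)
    earns a C; each further round of refinement earns a better grade."""
    if len(transcript) <= 1:
        return "C"
    return "B" if len(transcript) == 2 else "A"

transcript = iterate_with_chatbot([
    "Summarize the defund-the-police debate.",
    "What evidence supports each side?",
    "Where do those sources disagree, and why?",
])
print(grade(transcript))  # three rounds of refinement earn an A under this toy rubric
```

The point of the sketch is the shape of the interaction, not the canned answers: credit attaches to the iterative process, just as a law exam rewards issue-spotting over a single conclusory answer.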
Topics for interdisciplinary projects abound. The interdisciplinary courses could consider the pragmatics of climate change, reproductive issues and fetal personhood, sex-gender identity, grievance versus merit, and the multipronged issues embedded in societal law and order. I have written about these issues and feel confident exploring them with students. From my perspective, faculty would profit from tempering disciplinary jargon in favor of focusing on a comprehensive empirical ground.
Let us look at the debate about how law and order have been applied in the United States. There are clashing policies predicated on different facts, different weightings of these facts, different time scales, and bad facts—illusions. We encounter contradictory truths that guide policies such as defunding the police versus demands for more police presence; excessive force in policing versus decisions not to police; mass incarceration versus the consequences of widespread prisoner release; and racial bias in policing versus differential practices scaled to incident dynamics.
Students can be whiplashed by these competing “facts” until they begin sorting through them, listening to arguments from opposing perspectives. The purpose here is not to sketch out the thrusts and parries of the respective positions but to follow the thread set out at the beginning: How would students illuminate their understanding of one facet of this complex of issues through visual art?
Learning to use a chatbot as a study partner on issues such as law and order can be very beneficial. An interdisciplinary approach might draw on sociology, ethnic studies, anthropology, political science, criminal justice, economics, mathematics, statistics, philosophy, education, and so on.
And then, there is the story that can be pulled out of the data. The story can be told visually as a work of art, much as Pablo Picasso did with Guernica or Francisco Goya with The Third of May 1808. But the teachable moment in telling a story can be compromised and distort history; students should also be challenged to avoid a preemptive ideological framing. Given that AI image generators generally build in guidelines and often carry cultural biases, the path to creating a symbolic image is just as fraught as the one students experience with chatbots.
I decided to create an image that could be interpreted in opposite ways. I wanted to craft a triptych that would evoke a judge’s decision in the case of a young person convicted of a crime. The community could ask for restorative justice or perhaps embrace the result of the Innocence Project. The same community, however, could instead be depressed that the convicted criminal is allowed back into the community to commit more crime. Both readings are possible in my final image.
In one sense, the triptych allows for several perspectives. It is not quite a Rashomon-type ambiguity about different ways to interpret what happened, but it is enough to let the viewer grapple with the ambiguity of the outcome. The image generators I used, however, could not produce triptych images, so I faced a separate learning curve: devising a workaround to build my triptych. At first, I played with separate images that would occupy each panel. Images of a stern judge were plentiful but leaned into caricature. I asked for an image of a convicted criminal with arms raised and hands pointed in opposite directions; that proved a difficult ask. Instead, the face of the convicted individual was paired with hands that seemed to fit more with the image of a judge. I decided to accept that reading of the image even though it was not my first thought. When I asked for a diverse community celebrating a restorative justice outcome, I was surprised at how joyful their march appeared, so I changed the text to see what a depressed community might look like.
With all the variations I got from OpenAI’s DALL·E 2, I needed to step back and consider how these images could fit together. I decided to focus on the community image, which I asked to be rendered in the style of magical realism; I was thinking of the tradition of the Mexican muralists Diego Rivera, José Clemente Orozco, and David Alfaro Siqueiros. Turning to Photoshop, I used the generative fill tool, which let me easily expand the content of the community panel and incrementally add the panels of the convicted person and the stern judge. Still, the triptych lacked a three-dimensional perspective, so I used additional Photoshop tools to realize the final image.
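The core of the workaround, assembling separately generated panels side by side on one wide canvas, can be sketched in a few lines. This is only a toy illustration in stdlib Python: each "panel" is a character grid standing in for a generated image, and the panel names and symbols are my own placeholders, not output from any image generator.

```python
# Illustrative sketch of the triptych workaround: three separately generated
# panel images are pasted left to right onto a single wide canvas. Real panels
# would come from an image generator; here each is a grid of characters.

def make_panel(width: int, height: int, fill: str) -> list[list[str]]:
    """Stand-in for a generated panel: a height x width grid of one symbol."""
    return [[fill] * width for _ in range(height)]

def assemble_triptych(panels: list[list[list[str]]]) -> list[list[str]]:
    """Concatenate panels row by row into one canvas (all panels same height)."""
    height = len(panels[0])
    return [sum((panel[row] for panel in panels), []) for row in range(height)]

# Placeholder panels: convicted person, community, stern judge.
left = make_panel(4, 3, "c")
center = make_panel(6, 3, "m")   # the central community panel is widest
right = make_panel(4, 3, "j")
triptych = assemble_triptych([left, center, right])
print("".join(triptych[0]))  # first row: ccccmmmmmmjjjj
```

In an image editor, the same row-by-row concatenation is what pasting three panels onto one canvas accomplishes; generative fill then smooths the seams between them.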
My recounting of the image creation process is intended to reflect what a student might be expected to experience while working with an AI image generator.
Admittedly, I have focused more on the visual arts aspect of using generative AI in interdisciplinary coursework. In my view, working with image models may be the more challenging part compared with language models.
Figure 1. DALL·E 2 AI image model. A joyful community (left); a depressed community (right)
Figure 2. Triptych / Three Perspectives on Social Justice
In sum, Aoun’s recommendation, and my own, to bring AI tools into interdisciplinary coursework is not simply a matter of adding reading materials. There are important challenges, and they are worth engaging. This is one way higher education can evolve if it is to integrate AI meaningfully into a university education.
Image by Joe Nalven