A discussion of the approach to artificial intelligence tools at the mdw

The past academic year witnessed the establishment of the Senate Working Group on Artificial Intelligence, which includes representatives of all areas of the University and is the largest among the mdw Senate’s working groups. On what topics does it focus?
Dagmar Abfalter, professor of cultural management and head of the Department of Cultural Management and Gender Studies © Daniel Willinger

Dagmar Abfalter (DA): The working group was initially concerned with developing a position on AI as such, with a special focus on ChatGPT. There were lots of apprehensions about the quality of our work, misgivings regarding people's individual contributions, and also the question of whether AI might cause writing as a cultural technique to die out.

Karl-Gerhard Straßl (KGS): For quite some time now, we at the Competence Center for Academic Integrity have been studying text-generating AI tools and issues such as how machine-translated passages of text should be handled. Should they have to be marked as such? The Senate working group is raising awareness here and bringing together highly divergent approaches. We’ve now begun turning our efforts toward further concrete considerations for our institution that are highly relevant to final exams, written submissions, examination modalities in our programmes of study, etc.

Walter Werzowa (WW): Unfortunately, lots of people have this incredible fear of AI—and fear never helps. What does help is engaging with the topic and realising that AI is not “at fault”: what really matters is what the legal entities behind it do, how these corporations handle it. By which I mean Microsoft, Google, Apple—the big players. We need to talk about that instead of painting AI itself as the bogeyman. People still harbour strong reservations about AI, but we’re actually capable of far more innovative thinking.

KGS: I’ll second that wholeheartedly. And happily, we’re already one step further along. The question of whether we want it or not is long since passé. It’s now about the “how”, and that’s what we need to work on.


Christoph Stuhlpfarrer (CS): In quite a few respects. One mustn’t fear that this or that musical subgenre might be too un-academic. In actual fact, a university is precisely the place where we need to be experimenting and trying things out—including with new technologies—without drawing distinctions between good and bad.

Christoph Stuhlpfarrer, Digital Learning Coordinator © Daniel Willinger

DA: I agree with you on that. But I’d like to raise the question of whether and how this openness being called for would actually lead to that much more innovation. After all, what these algorithms are built to do is reinforce the mainstream, to amplify what we already have lots of—they’re based on masses of existing data. And in that regard, there absolutely are critical positions. We know that algorithms prefer white and, above all, male positions. What’s innovative, on the other hand, is what’s made by human beings.

This was one of the issues we dealt with last year on International Human Rights Day, where we focused on the tension between AI, the arts, and human rights—and on an AI aesthetic that mainly produces images fed by the old patina of conservative pictorial worlds and reflects them back into society.

DA: Yes, and there’s a second thematic emphasis that comes into play here where intellectual property is concerned: the fact that the mainstream reflects back material that’s conservative and already present in great abundance makes it that much more difficult to monetise one’s own innovative ideas. This is on its way to becoming a huge problem for artists.

CS: It’s necessary to draw a distinction here: one thing is the innovation of AI per se—the fact that we now actually have such a tool in our hands—and the other is how it’s fed. The volumes and types of data influence the results. But there are lots of varieties of AI that could be more interesting to the mdw than those that process text—like algorithms for isolating musical parts or generating written music, which could ease a teacher’s everyday life. So the question is how I use AI.

WW: That’s the point. AI is a tool. It’s not a search engine where you search for a solution and sell it, but something from which you can take inspiration.

KGS: I think that “inspiration” is a good word in this context. It implies, however, that you need to be able to deal with it. The skill of questioning the results one obtains from AI has to be learned and taught, and we’re definitely still behind the curve on that count. In and of itself, AI is simply a new tool that builds on the existing mainstream. One should always devote some scrutiny to what an AI tool spits out. To say nothing of the attendant legal issues, of copyright and of who bears responsibility.

Walter Werzowa, professor of media composition at the Department of Composition Studies and Music Production © Daniel Willinger

WW: What do we understand mainstream to mean?

DA: Mainstream refers not to a value judgment but to what’s broadly distributed in society or among certain groups. I used the term to point out how marginalised groups, those who aren’t mainstream or in the middle, face special challenges. It’s going to become more difficult to monetise artistic work because much human work will be taken over by AI. Sure, it’s an opportunity, and it will ease and democratise access for many, but the worlds of artistic work are changing at the same time. And those at the cutting edge of this creativity may just end up being the ones put at an economic disadvantage.

WW: Isn’t mainstream more of a temporal problem? Van Gogh certainly wasn’t mainstream in his time, yet his artworks are now among the most expensive and most coveted worldwide.

DA: That’s a great example of what I’m saying, though, because Van Gogh himself didn’t benefit from it at all. And at the same time, it demonstrates the phenomenon we call “crowding out”: he now occupies such a huge position on the market that he robs young living artists of financial opportunities. Which is not to deny that Van Gogh was a great and inspiring artist and does indeed deserve this esteem, posthumously as well.

WW: All of us are influenced by great artworks from the past. I talk to my students about the importance of David Raksin and Bernard Herrmann. So what’s the difference between me telling them that and AI telling them that?

DA: Well, art and music markets have always been subject to various influences. And if you, Walter, give me a recommendation, I can take a look at your biography to find out what shaped you and formed the basis for your recommendation. But AI is a black box, behind which stand huge corporations with economic interests. That leads me to view human evaluations as significantly more valuable and more trustworthy.

KGS: Ergo, we humans place our minds and spirits above programmed machines. That entails a need for standards we can agree upon so that there can be trust in what AI produces. And bound up with that is trust in universities and their production of knowledge.

CS: I ask myself: If a machine can do something at the press of a button, how valuable is it for graduates to still be capable of performing the same task? Shouldn’t we then be devoting ourselves to other things?

We’ll eventually get to the point where we’ll no longer be asking where we want to use AI, but—in part due to economic pressure—whether it’s still acceptable for human beings to do work that AI does faster and better. What should AI actually do, then, in your opinions and in your respective areas of work? Where can it be employed productively and in ways that make sense?

WW: I hope and do believe that AI will help us overcome a certain hesitance so that we can get back to being more creative. That might be in private and just for fun, or in work-related situations. Technology can benefit the creative process—just like the quill pen and the metronome helped Beethoven to compose.

KGS: AI cannot be an author. Legal responsibility is borne by the person who makes use of the texts and images AI produces, and one needs to be cognisant of that. Even so, it presents a great opportunity to change something in terms of students’ written submissions. A final assignment doesn’t always have to be a paper written alone at home. Learning processes could come more strongly to the fore. And we, as an arts university, have still more options in this respect and should think about how to use AI in a sensible and well-scrutinised manner. All of this, however, rests on standards: transparency, honesty, and the like are the foundations.

Karl-Gerhard Straßl, jurist & head of the Department of Organisational Law and Professorial Appointments Management and Competence Center for Academic Integrity © Daniel Willinger

DA: Even using AI, I’d want to uphold the principles of rigour and relevance. My hope would be that AI will give us more latitude by clearing obstacles out of the way. To concentrate more on what makes us researchers: combining things and conceiving of them in new and different ways, engaging more strongly with societal themes. I’d also like AI to take the bothersome aspects of research off my hands—like perhaps interview transcription or searching for literature. And for the students, I’d like AI to take some pressure off their everyday lives, giving them more latitude to learn the important things, to develop personally, and to try things out.

What role can or should a university assume amidst these aspects of innovation, exploitation-related interests, quality criteria, social responsibility, and responsibility as an educational institution where AI is concerned?

WW: MIT has a course that students are only allowed to complete using AI-generated output. Doing so makes one realise how difficult that actually is. I think it would be great if we’d try out something similar at the mdw—because it would enable people to quickly realise AI’s limitations and maximise learning. Everyone should be able to take part in something like that.

KGS: I view our task as a major arts university as being to provide a safe space in which to experience the possibilities offered by AI so that our students and faculty can take away something from which they’ll benefit in the outside world, as well.

CS: I think that corporations and money are setting the course here and that universities will have little influence on it. But we can give rise to a certain level of awareness.

DA: I don’t see the role of universities changing here, because the point is to enable students to try things out. So our role will continue to be that of encouraging creative and critical thinking.
