ChatGPT, DALL-E, Midjourney, Stable Diffusion—names that most of us hadn’t heard more than a couple of years ago now represent a slew of creative programs powered by artificial intelligence.
Large language model AI programs can write stories and articles, make illustrations and artwork, and converse with users using prompts. But what does it mean for human artists and writers? Will AI steal jobs and creative works? How should people approach the thorny ethical thicket around AI-generated art?
Mark Fagiano, a philosopher and instructor at Washington State University, talks with Larry Clark, editor of WSU’s magazine, about how ethics in action and pragmatism can help people examine not only AI art, but any rapidly evolving technology and issues in society.
Read about research into measuring AI’s capabilities in “When will artificial intelligence really pass the test?” (Washington State Magazine, Spring 2023)

Support the show
Want more great WSU stories? Follow Washington State Magazine:
How do you like the magazine podcast? What WSU stories do you want to hear? Let us know.
ChatGPT, DALL-E, Midjourney, Stable Diffusion. Most of us hadn't heard these names more than a couple of years ago. But these creative programs powered by artificial intelligence have taken the world by storm. Large language model AI programs can write stories and articles, make illustrations and artwork, and converse with users using prompts. But what does it mean for human artists and writers? Will AI steal jobs and creative works? How should people approach the thorny ethical thicket around AI-generated art?
Welcome to Viewscapes--stories from Washington State Magazine. Connecting you to Washington State University, the state and the world. Hi, I'm Larry Clark, editor of the magazine.
I met with philosopher Mark Fagiano at Washington State University about how ethics in action and pragmatism can help people examine issues not only in AI art, but any rapidly evolving technology in society.
So if you could just introduce yourself?
Sure. I'm Dr. Mark Fagiano. I work here as a professor of ethics and philosophy, and I focus mostly on pragmatism, writing now on empathy, race, and democracy.
That's great. And I contacted you because I've become really fascinated with AI. And in particular, AI generating art, writing articles. But that opens a whole can of worms when it comes to ethics and some ethical dilemmas. So I'm hoping to learn some from you, and maybe talk and converse a bit about what is AI? You know, is it actually generating things? And is AI really intelligent?
Yeah, that's very interesting. I think we might also have to define what we mean by ethics. There are different schools of thought, but my loyalties go back to the ancient Greeks and the notion of ethos, which signifies character; "ethical" means related to character. One develops a character by repeatedly doing something, and by repeatedly doing some activity that leads to a certain sort of flourishing, or excellence, of the action. And so for someone like Aristotle, you'd say, well, it seems we all strive toward that for itself: to flourish, to do well in life. And then, of course, there are other schools of ethics, more about following certain rules and principles, acting in line with reason.
And then the school I'm more from, in addition to being enamored with the Greeks, is American pragmatism. So mostly William James and Dewey, in terms of my own thoughts. Now, I really think that sometimes ethics has to be done on the fly, to some extent. And that's kind of echoing what William James said, and I wish I was brilliant enough to have said it: you can't really have an ethics made up dogmatically in advance of experience. So when more experience arises, then there's more reflection, and more of an ethical engagement with these transformations that are happening in society and in our own lives. So those are my loyalties.
So it's ethics in action. It’s something that's developed.
Right. And the difficulty with that, I think, is that ethics is probably the hardest part of what I do. I don't want to make it sound more difficult than it is, but, you know, life is complicated, and so is trying to figure out what it even means to flourish and do well. And then if I'm flourishing and doing well at the exclusion of another person being able to do well, is that ethical? So there are all these questions regarding ourselves and what it means to do well and flourish, and then there are the relations between ourselves, other people, and the community at large. Very difficult topics come up.
And so I think one of the things that happens is people just try to create these rules and then say, "Hey, this is ethics, just follow the rules." Well, you know, I don't like that so much, because it doesn't involve very much active participation and reflection upon the difficulty of making decisions amid all these changes that are happening. What's the good? What's the right thing to do? I think this is really resolved by making sure we're having discussions like this, and creating opportunities institutionally to have those types of discussions with the community at large.
Yeah, and I think this really applies to this conversation about AI and what it's starting to do in the world of art, creative expression, and writing. I mean, I don't even know if I should call it creative expression.
That's an interesting question. So, a couple of things about AI to start with. There's a philosopher, John Searle, who came up with an example called the Chinese room argument. In the Chinese room argument, you can imagine somebody in a box, sort of like this room here. The person doesn't speak or understand any Chinese. But somebody outside who does speak Chinese is handing the person little tiles with Chinese characters on them. And there's a rulebook in the room for the person who doesn't understand Chinese, with rules that say, hey, put these in a certain order. So the person inside the room doesn't know what's being said, but they follow the rulebook, put the tiles in a certain order, and then hand them back out to the person who speaks Chinese.
So what happens is, basically, there's a question asked by the person outside, and then there's an answer that the person constructs in the room and hands back out. "Oh, that person gets it?" Well, no, of course they don't. They're just following the rule, or algorithm, or the pattern, right? To arrange it a certain way. And this is really important, because this was thought up by this philosopher and others a long time ago.
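Searle's setup can be sketched in a few lines of code, which makes the point vivid: the program below pairs inputs with outputs purely by rule lookup, with nothing anywhere representing what the symbols mean. (This is an illustrative sketch; the phrases in the rulebook are placeholder dialogue, not Searle's own examples.)

```python
# A minimal sketch of Searle's Chinese Room: the "rulebook" maps input
# symbols to output symbols. The room follows it blindly; no part of the
# program represents the meaning of the symbols.
RULEBOOK = {
    "你好吗": "我很好",            # "How are you?" -> "I'm fine"
    "你叫什么名字": "我没有名字",   # "What's your name?" -> "I have no name"
}

def chinese_room(symbols: str) -> str:
    """Return whatever output the rulebook pairs with the input symbols."""
    # Unrecognized input gets a scripted fallback ("please say that again").
    return RULEBOOK.get(symbols, "请再说一遍")

# The room "answers" the question correctly, yet understands nothing:
print(chinese_room("你好吗"))
```

To the Chinese speaker outside, the replies look competent; inside, there is only symbol shuffling. That gap between behavior and understanding is the point of the argument.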
It's basically saying, look, there are different types of AI. One way we think of AI is what's called weak AI: okay, an artificially intelligent robot or computer can arrange certain things in a certain way, but it doesn't really know it, right? But then there's an idea called strong AI. And we see that in the scary movies, where the AI is going to come and get us. [Clark: Skynet?] Yeah, "I'll be back." [laughing]
And so the idea is, okay, in artificial intelligence now there are these layers, right down to machine learning and now deep learning. So some people think that in the area of deep learning, it might be the case that there is that strong AI. But I'm pretty skeptical about that at the moment. We'll see. Besides all that, some of the things that can be done, like, for instance, diagnosing cancer earlier than any doctor, are amazing. So if you have enough inputs, in the deeper levels of AI, there can be amazing things done.
The big ethical questions are: who is experiencing any sort of value from it, and who's being excluded, ethically, as far as the ability to flourish? Which probably brings us to the question of artists. I mean, here are people who dedicate their lives to becoming digital artists, and now this is arising. This has happened a lot of times historically, when a new technology comes around, in the early 1900s, you know, the steam engine or whatever it might have been. "Oh no, there go all the jobs, how terrible this is." Well, this is the sort of thing that happens. So we can't be too worried about the fact that there are going to be disruptive technologies. Especially in the next 10-20 years, a lot of jobs are going to be lost, and a lot of people are going to have to shift their focus. I think if people want to be practical about having a job, then they want to think about, what can I do that involves something an AI computer can't do? Like, for instance, be creative in a certain way, or be skeptical, or have emotions.
You know, people talk about how well an AI computer can mimic any emotion, but really, no, they're not experiencing emotion. Emotions are evolved characteristics that we have so that we're moved. So anyway, what is the thing a computer can't do? I really feel sad for the folks that have spent their lives trying to create beautiful things for people. The problem is that most jobs like that, jobs that are subservient to a capitalist society of some sort, to some sort of commercialization, are probably going to be "made useless." And I'm showing scare quotes here: "made useless" in the sense of not being able to fulfill the needs of a capitalist society.
I think that's one of the things that I've heard and read quite a bit about: these artists who feel like they are being impinged on by this commercial tool, which is essentially what it is. Anybody can get onto one of these art-generation AIs like Stable Diffusion, put in some prompts, and it'll create artwork, for lack of a better term. And so, going back to what you were talking about, people flourishing: what actions are happening that will either help them or harm them? And they're claiming that they're getting harmed.
I'm sure they are. So one of the ideas is, well, you can't have an AI company say, "Okay, you've been harmed, so here's money." I just don't think that's going to happen. They are being harmed. The question is what one can do about it, and I don't know if one can do much. I would say maybe try to focus on art that's more disruptive and can't be copied. And that involves another thing that AI can't do: these higher imaginative explorations of the mind. An AI can't have a mind's eye. You know, if you close your eyes, you can see things: I can see outside and the rain, and I can imagine Isaac Newton walking across here if I want to. Those sorts of things aren't part of an AI machine. So in what sorts of ways can that lead to certain kinds of creativity and imagination?
So I think one of the big questions for us today is to think about what's different between humans and an AI machine, and there are a lot of things that are different. If people really want to focus on the pragmatic, like "get a job that's not going to be overtaken by the computer," I'd say focus in on those things.
I sometimes say as a joke that maybe philosophers will be the only ones left, because they just doubt things. [laughing] That might be advantageous. But I mean, with every bad thing there's clearly harm, but there are also going to be a lot of positive things, like the joy someone feels: "Oh wow, look, I just put my ten pictures in." There's this one called Lensa. You put ten or fifteen selfies into the app, and it gives you all these cool pictures of yourself. And that brings a lot of joy, so that's not harmful, right? But then, you know, it's an exchange. There's harm, there's also ease of use, and there are also certain joys that happen. That's part of the nature of reality.
Are there some other aspects around identifying what is an AI-written article, or an AI-generated artwork? Is there some compelling need for people to identify it? Hey, you know, this was done by a large language model, as opposed to just putting it out there. Does it need to be credited to an AI? This came up recently, I forget which news outlet, but they said, "Oh, you know, we've been generating these AI articles for a while." And I know for a fact that, for example, sports recaps are often written by AI. Should they be identified that way?
Hmm. That's a big question. I think the lawyers will have to deal with that. But I do think that my own experience of chatting with ChatGPT is just the first one that's really good. It's gonna get more intense, and we'll see what happens. And again, this is the whole point of ethics: here comes experience, it's rushing in, now let's reflect. I think at this school or any other school, sometimes the first initial sense is, "Okay, we don't trust the students, they're going to use this. How do we make sure that we're grading them right, that they're not cheating?" Can you try to censor things and take a hands-off position? Of course you can. You can say you can't use this, and you can ban it from the school, which I think we have, I don't know, maybe we haven't. And I understand that. But then again, ethically the question is, how do you create assignments that the student can't use it for?
I've used it for summaries of things. And what I've noticed is, first of all, a lot of the first sentences are just throwaway sentences, so it's not very engaging just in terms of writing, and a lot of times it's wrong. I'm sure it'll get better and better. But the great thing about writing is that it's never just an explanatory venture, where I'm just explaining what something is. That, I think, is what AI can do well. The question is, what is the significance of those facts? Where do we go from here? What do we do based on the facts? And then there's the interpretive dynamic: an AI is not subject to hermeneutical differences in the same way a human is, and not influenced by the same cultural biases and loyalties and things like that. So another thing, in addition to emotions and skepticism, is interpretation of text and of meaning. So yeah, maybe philosophers and hermeneuticists will be the only ones left standing with a job. That'd be terrible: philosophers have to work while everybody else gets UBI, which might happen, you know. I could see unemployment going to 30-40%. [That's universal basic income.] Yes.
So yeah, I think that's gonna happen. Not in every society, but I think it's going to happen. And it's not going to happen because the government just loves you so much and wants you to be okay. A lot of times when there's any sort of welfare, whether it's free money given to students or anything else, it's to prevent an uprising. I mean, there's not going to be an uprising, and so that's fine, you know, but it still will be a lot more limited as far as what you can do.
So I think, just in terms of the future, we don't know what's going to happen. But I think the next ten years will be something you never expect. It's going to be amazing, it's going to be frightening. But it's not going to be frightening in the way people think, with the robots taking over. The robots are already taking over, being used as weapons in a certain way. Humans are very much using AI in practical, but also somewhat malevolent, ways, for their own purposes and for certain sorts of warfare, which I think will get even worse. It's hard to say what the future of that is, but in terms of the whole narrative of "will they take over?", that's already happening in a certain way, and it's being used and directed by human beings for their own interests.
People's response to AI has really been fascinating to me. There's the panic we were talking about, that AI is taking my job or destroying this particular field, say art or writing. And I read a piece in Wired that gave me a very different look at it. It talked about photography and how, when photography came out, a lot of people said this is the end of art as we know it, there will be no more artists. Well, what was interesting is that there were more artists after photography; art became more in demand. So I wonder if something similar is going to happen with AI. Sure, it's a disrupter, but you mentioned, and I love this, the idea of doing art that's more daring.
I don't want to go into my theory of aesthetics, but this is something that has been discussed historically: what is the nature of art? Are there certain types of art, and is there any hard hierarchy? In a democratic society, people say, well, art is just the feeling it causes in you. But then there's the question of who's judging whether it's daring or disruptive. You know, I can't help but think, in line with Adorno and a lot of critical theorists, that when art is subservient to a capitalist society, or subservient to the interests of some commercial project, it's already limited as far as being disruptive. Just like in an interview: maybe if we had an interview with just you and me talking, versus if I'm on NBC, there are some things I couldn't say there, because it's brought to you by Taco Bell. In the same way, when you are subject to the marketplace, there are limitations on the creative act.
So, like you, I think a lot of people feel the sky is falling. But I think you're right that you just don't know, and the photography example is perfect. I mean, I know for myself, I never got into the darkroom. But I really enjoyed it when I got a phone and could take pictures of things I saw. People share a lot of beautiful things that they see, which is nice, and also a little bit obsessive in certain contexts. But yeah, it's hard to know where things are going to go.
The biggest concern I have is the unchecked use of AI for certain kinds of power, to give advantage to some people and to forget so many others, right? I think with the things that are going to be advanced, like different sorts of devices that can go into the brain, who's gonna have access to that? What sort of laws are going to be created to protect the vulnerable? That's the big thing. I think we really have to worry about how AI will use some sort of neural link or other things people have put into the brain.
I mean, already, just having access to the internet is way advantageous for people who have it compared to, say, the Global South. So that's the big thing that needs to be really checked, and if it doesn't happen, then you're gonna have a lot of bad consequences. And, you know, I'm a pragmatist, a consequentialist. I'm worried about, okay, let's think about this: this doesn't really have moral value in and of itself. It produces things which produce harm or pleasure, as you said before, which is a great way of thinking about it ethically, I think, in terms of what harm not only exists now, but what harm may be created if we don't think carefully about this.
Yeah, I can see some real danger of using AI as a tool of control, which is probably very tempting, especially in authoritarian governments and places where they gain advantage from controlling people.
Even today, with our online presence or the phones, it's basically cookies tracking us. And then it's a type of intelligence in the sense of an ordering, trying to sell us more things based on what we do, so that might be very advantageous. I'll tell a story, and it's so weird, you know I didn't make it up. I went on Google and wanted a Walt Whitman t-shirt. I never had one, you know, to be the cool teacher: hey, look, a Walt Whitman t-shirt. So I did that. And then a day later, on Facebook, on the side, it said, buy a Walt Whitman t-shirt. So, yeah, you knew that was it; no one else is gonna freaking look up that t-shirt. So, you know, it was tracking me. These things are already being used.
And then there's the moral question: if you want to commit your life to just doing that sort of thing, where you're following people and trying to sell to them, that's your choice, but it's also manipulative. In our culture, people praise the almighty dollar, but at what cost, manipulating people's choices? So I think that's going to become more of a problem, where AI or other forms of computer interfaces will, again, be subservient to a marketplace, and there's no escaping it. I'm not a Marxist, although I can understand what Marx is saying in the 1844 economic manuscripts and the problems he sees, which are manifested in different ways today. But at the same time, we need to be vigilant about digital privacy.
I think that's really my last point: machine learning in AI is just going to accelerate, because one of the things I understand AI is very good at is making those connections. You know, pulling those threads of information and intelligence and creating ways to reach consumers. It's a form of control.
It's a form of control. And there's no way, again going back to James, to have an ethics made up dogmatically in advance of experience. We need to be vigilant as experience comes in, as now, when we're having this discussion and reflecting on these different developments. And that's the key. I think ethics, as people practice it, is too much in line with compliance, and not enough about thinking through the emerging things that are happening, with reflection and vigilance toward them. Because the rules won't catch up by themselves. And it's not that you'd necessarily have to make new rules, but you have to have some reflection and analysis of these emerging phenomena in relationship to harm and pleasure and these sorts of things.
All this relates to the future of education as well: educating citizens so that they can analyze and think about these developments and come together to make collective decisions for their community and for the nation. That's very much involved. For certain aspects of AI tech, people can stand up and say no to it if they see the harm. That's the power of a democratic ideal.
So, I guess the answer is to say, just wait and see. Keep seeing how things come in, keep the lines of communication open. It's all part of an open society: keeping lines of communication open, inviting people, being inclusive, to think about these things. Because the more people who know about the harms, the less chance that those harms will be caused in the future.
But like we said before about this whole situation with the artists, I feel really bad, especially for the artists. You know, art sits in a warm place in our soul, and for people who are artistic, trying to create beauty, it's just really tragic. But I don't know what can be done about it.
Well, I think you outlined some ways of thinking about it, though, in terms of ethics being something that you don't just have rules for. I don't know if you'd use the term legalistic, but that's what jumps to my mind, as opposed to something that's practiced and that we have to constantly work on.
And we make mistakes. The certainty of the rule allows us to think, well, I'm not going to be blamed for any experimental idea I had where I made a mistake. So there's a security and a safety to the rules. And beyond that, I'm not just saying it's a psychological phenomenon: the rules serve certain interests, and they do keep things together, especially in an institution. However, at what cost, if it's only that?
Is there anything else that, you know, people could think about? You know, as AI starts moving beyond just like writing and art and starts impacting our lives in different ways? Is there a practical way for them to use this ethics of action?
Yes, stay informed and engage if you see things emerging. That's part of being a participant in society. And then, back to what I said before: I don't want to tell everybody, "Go study AI and what it can't do," but ask, "What will an AI machine never be able to do?" If you're thinking about charting your future or career, what's the sort of thing a human can do that an AI machine can't? I think the process of being skeptical about some things is one of them, and being emotionally moved when you see somebody in pain or harmed, right? Those are the sorts of things that… I don't know, who knows what's gonna happen. I could be totally wrong; we can look back at just five years ago and, gosh, look what we have now. But I just don't think those sorts of things are possible. And then also interpretation, different types of interpretation, and different sorts of imagining. Just trust yourself to imagine things, and have that time alone, in solitude, to imagine your future and your life and to create it from those activities.
Yeah, I think that's a big piece of what makes us human.
And I do want listeners to know that none of this was done by AI. [laughter]
Well, I gotta tell you, I do have a neural link in my brain. “This is brought to you not by AI, but it has been brought to you by Taco Bell.”
Well, great, thanks so much. Thank you so much. Appreciate it.
Thanks for listening to Viewscapes.
You can read about AI and how researchers are measuring its capabilities in the spring 2023 issue at magazine.wsu.edu.
Our music was composed by WSU Emeritus Professor Greg Yasinitsky.