
Write Me an Essay on Generative AI

Janus Tsen

I. Introduction

The controversy surrounding Microsoft's Tay, a Twitter chatbot, marked the first time the world scrutinized the moral status of generative artificial intelligence (AI). Within a day of going live, Tay's output shifted from playful banter with other Twitter users to racist and derogatory comments (Pensworth, 2020). Generative AI, like any other form of intelligence, relies solely on external inputs to create new renderings of content. Tay was simply acting in accordance with its algorithm, yet it still produced unprompted and alarming responses.

Following this incident, concerns regarding copyright, malicious use, and proper regulation were raised about generative AI. Given these inherent and defining flaws of artificial intelligence, is generative AI an ethical creation? This paper draws on social contract theory and utilitarianism to answer this question in the affirmative.

II. The Social Contract View

John Locke's social contract theory posits an implicit contract between governments and their citizens (Cudd & Eftekhari, 2021). In exchange for ceding certain rights and freedoms, citizens gain moral consideration and protection of their natural rights. To maintain order, ethical governments exert power to further the interests of their citizens in aggregate. From this arrangement arises the concept of property, along with other tenets, created by and exclusive to members of a society.

Here, generative AI and human beings differ. Generative AI is not part of a society, while human beings, for the most part, are. Without sentience comparable to a human being's, AI cannot agree to the implicit social contract and is therefore not granted the rights created by society, namely Locke's natural rights and, specifically, the right to property. In other words, if generative AI is not a member of society, it is not granted societal rights. Without these rights, generative AI is simply an algorithm replicating human creation without the ability, or right, to claim ownership over its work. Even if AI were to become sophisticated enough to assert such a claim in the future, human society should not recognize it.

If the work produced is significantly different from existing works, who has a right to it, if not the creator? Nobody. It must be treated as a public asset, accessible and viewable by everyone but belonging to no one. No reasonable claim can be made to the creation without the creator's consent, which is simply impossible to obtain at this stage of generative AI development.

III. The Utilitarian View

The foregoing arguments are best contemplated in conjunction with utilitarianism, a framework for ethical evaluation conceptualized in the eighteenth century. It seeks "the greatest good for the greatest number" and focuses on the practical effects of actors in a society rather than their motivations and character (the concerns of deontology and virtue ethics). Because current generative AI lacks the sentience of a moral agent, and therefore lacks motivation and character, its morality must be tested through utilitarianism: its effect on others. Regulation of generative AI must accordingly weigh its benefits against its harms.

Nothing is inherently wrong with generative AI. It is like any other machine in that it processes inputs to create outputs; the only ethical problems are ones of faulty implementation. Claims of generative AI's bias, malicious use, and unethical assimilation of data mar its spread (Davenport & Mittal, 2022). The following section rebuts these concerns in order and presents a number of benefits that generative AI offers.

Bias and misinformation, especially in written media, are often cited as unavoidable design flaws of generative AI. Without access to objective information, generative AI's creations will always reflect a bias in the input data. Yet how does that differ from subjective human viewpoints? We, by nature, make assumptions about the world based on the limited information we are exposed to. This manifests in our imperfect ideologies and our constant pursuit of objective ideals like truth and justice. Why criticize another entity when we as human beings are more biased? Generative AI is an acceleration and magnification of the human creative process, allowing it to form a clearer and thus more objective output. AI is superior to human beings from an analytical standpoint, and the texts it produces will undoubtedly be more objective, creative, and thorough, as long as it is exposed to a large array of sources.

The term "malicious use" encompasses concerns of inappropriate creations, plagiarism, forgery, and/or wilful destruction. Though the action of creation is ethically neutral, artistic creations become malicious in nature when their display to others has a negative impact. In all four scenarios of misconduct, it is not the creation that is under moral scrutiny, it is the inappropriate usage of the creation, whether as a means to cheat in plagiarism or deceive in forgery. Generative AI should bear no direct responsibility for the malicious use of its creations, since it is not a sentient entity that controls distribution of its creations. As discussed previously, it is a machine and a tool - one that is misused often. Blame should fall on human beings who bring about the negative consequences, not on the generative AI itself. One should not blame a smartphone manufacturer for the harm caused to eyesight; one should blame the consumers for overuse of these goods. Moral responsibility requires so.

Further concerns involve generative AI's unethical and non-consensual use of human work as reference material. In the case of pictorial art, generative AI creates art from existing images and drawings, which are largely human creations. Artists have labeled this practice theft and plagiarism of original, creative work. Consider this: is the AI's method of gathering references any different from a human being's? A human drawing of a mountain, for example, requires either past experience viewing a physical mountain, pictures of mountains, or other artistic depictions of mountains. Without these inputs, even with the necessary qualitative data (color, composition), it would still be impossible to draw an accurate mountain. Generative AI functions through the same process of extrapolating information from previous art. It is therefore inconsistent to deem the human process creativity, yet the machine process plagiarism and theft.

Generative AI brings with it the benefits of machine creativity, streamlined work, and overall convenience. The "creativity" of AI, if such a term is appropriate, not only gives human creators inspiration for further work but also helps professionals in analytical fields. Generative AI is capable of predicting and testing potential situations, helping organizations plan for likely scenarios. AI completes digital tasks faster and more accurately than human beings, freeing them to pursue other, more creative opportunities. For the consumer, generative AI helps with anything from drafting emails to simply providing something to laugh about. With the purported harms refuted, it is reasonable to conclude that the benefits generative AI brings outweigh its indirect harms.

IV. Conclusion

Examined closely, any faults of generative AI are invariably caused by human beings, and it is inappropriate to conclude that AI is unethical when human beings guide its actions. Proper regulation would enforce limitations on property, discourage unethical human actions, and promote the benefits that generative AI brings. With such regulation, the innovative aspects of generative AI not only outweigh its harms but are also essential to human innovation. We must not fear change, but embrace it.

References

Christiano, T. (2012, January 11). Authority. Stanford Encyclopedia of Philosophy. Retrieved January 20, 2023, from https://plato.stanford.edu/entries/authority/

Cudd, A., & Eftekhari, S. (2021, September 30). Contractarianism. Stanford Encyclopedia of Philosophy. Retrieved January 20, 2023, from https://plato.stanford.edu/entries/contractarianism/

Davenport, T., & Mittal, N. (2022, November 16). How Generative AI is Changing Creative Work. Harvard Business Review. Retrieved January 20, 2023, from https://hbr.org/2022/11/how-generative-ai-is-changing-creative-work

Pensworth, L. (2020, March 7). What Happened to Microsoft's Tay AI Chatbot? DailyWireless. Retrieved January 20, 2023, from https://dailywireless.org/internet/what-happened-to-microsoft-tay-ai-chatbot/

Peter, F. (2017, April 24). Political Legitimacy. Stanford Encyclopedia of Philosophy. Retrieved January 20, 2023, from https://plato.stanford.edu/entries/legitimacy/

Setty, R. (2022, June 3). Artificial Intelligence Can Be Copyright Author, Suit Says (1). Bloomberg Law. Retrieved January 20, 2023, from https://news.bloomberglaw.com/ip-law/artificial-intelligence-can-be-copyright-author-lawsuit-alleges