Artificial intelligence has long been heralded as the technology of the future, but not without its fair share of criticism and fear. The rapid shift toward AI within the past year has elicited more concern than excitement. This apprehension is not a new phenomenon: dystopian films old and new, from “2001: A Space Odyssey” to “A.I. Artificial Intelligence,” have channeled the same anti-AI sentiment. 

There is some merit to these fears, as technological advancement can produce negative outcomes: climate change, nuclear weapons and widespread misinformation, to name a few. However, these consequences distract from the numerous possibilities for positive change, such as AI’s potential to restore and interact with historical artifacts. We have thought of AI as a way to create, but right now its purpose should be to use its unbridled potential to connect us with the past. 

AI and machine learning have proven surprisingly effective in academic and artistic endeavors. Historians have used machine learning and neural networks to analyze centuries-old documents that have degraded or been smudged in storage. AI has also been useful in restoring lost artwork, such as the much-discussed “Faculty Paintings” by Austrian painter Gustav Klimt. The paintings were plundered by Nazi Germany during World War II and subsequently burned, surviving only in black-and-white photographs that could not do the originals justice. However, machine learning technology drew on historical accounts of the paintings, along with Klimt’s other works, to colorize those photographs. The result gave researchers and the public a better idea of what the paintings looked like while maintaining the integrity of the original work. 

This restoration process can also extend to other art forms, such as music. The Beatles, who broke up in 1970, released their “final” single, “Now and Then,” on Nov. 2, 2023, with a little help from AI. The famous quartet became a trio after the death of John Lennon in 1980 and a duo after the death of George Harrison in 2001. Surviving band members Paul McCartney and Ringo Starr, along with director Peter Jackson, used machine learning to isolate Lennon’s vocals from a low-quality home recording he made in the 1970s. Importantly, nothing new was generated by AI; McCartney and Starr overdubbed new instrumental parts on top of the extracted Lennon vocals. The single received critical acclaim, with reviewers praising its production and integrity. 

There are obvious drawbacks to this technology that are worth mentioning. Just as AI makes it easy to interpret history in new ways, it makes it arguably just as easy to falsify and obscure the true historical record. “Deepfakes,” for instance, have become commonplace on the internet, using AI to fabricate historical media out of thin air or to alter existing footage. One example that received attention in 2020 was a computer-generated video of former President Richard Nixon announcing the failure of the Apollo 11 mission; in reality, Apollo 11 reached the moon. Another is a now-common scam that uses deepfaked audio to solicit money by imitating the voice of a supposedly distressed family member in need of quick cash. However, there are many ideas for possible solutions, such as embedding watermarks in AI-generated images to identify them as computer-generated or establishing secret code words among family members.

Some artists are notably against the use of AI in their field. Common grievances include possible copyright infringement, as AI models train on existing artwork created by humans, and the concern that complex computer-generated illustrations may devalue human skill as a whole. These are legitimate concerns, emblematic of the need to tread carefully when mapping out AI’s future. They also provide a compelling case for more active regulation of AI; such measures, which many support, can curb many of its less desirable effects while leaving the door open for its many benefits.  

Using AI as a means of restoring and working with what has already been created should serve as a model for how it can be used both on college campuses and beyond. It should not be an additive process, but a restorative and collaborative one. Much remains unclear about how AI should be used in academic settings, but this approach, treating AI as a tool that complements human ingenuity, removes many of the ethical concerns inherent in other uses. Through these restorative modes, the future of technology should be a means for us to explore and navigate the past. While fears surrounding AI are valid, its efficacy and consequences are largely dictated by the intent and wisdom of its use. 

Hayden Buckfire is an Opinion Columnist who writes about American politics and culture. He can be reached at haybuck@umich.edu.


