Recently social media has been abuzz with art and text generators driven by artificial intelligence (AI), leading consumers everywhere to question just what kind of robots we've let into our homes and lives.
In November 2022, leading AI research lab OpenAI released ChatGPT, an artificial intelligence language model that, when given a prompt, produces conversational and convincing text. On its heels came Lensa, an app from Prisma Labs that uses AI to transform selfies into avatars that make users look like they stepped out of a Marvel comic book.
And on the music side of things, Riffusion, created by Seth Forsgren and Hayk Martiros, uses the open-source AI model Stable Diffusion to turn text prompts into spectrograms that are then converted into audio, generating music from written descriptions.
Such AI applications are captivating, pushing the boundaries of everyday creation and commercial services. They are also raising questions around privacy, property rights, biased datasets and the future disruption of industries. And these concerns are sparking bigger conversations about the risks and benefits of AI technology.
Yes, AI runs the gamut of uses, from work to entertainment. Let's have a look at the implications of AI, particularly the risks, benefits and ethical questions it raises.
AI refers to the intelligence exhibited by computer systems that are developed to perform tasks and improve themselves based on the information they collect.
This kind of intelligence stands in contrast to the intelligence found in humans and other organisms, who possess problem-solving, reasoning, and adaptive skills.
Across the industry, researchers tend to classify AI into three categories:
Narrow AI is designed to handle a single subject or narrow task and cannot go beyond the rules it was programmed with. Siri, for example, is a narrow AI system.
General AI (AGI), on the other hand, is technology that could think like humans, process and solve complex tasks, and get better at solving problems independently, imitating the workings of the human mind.
Last, super AI is even more meta: it refers to hypothetical AI systems that would surpass human ability altogether.
So are we all evolving into cyborgs with lasers shooting out of our eyes? Nope, not yet. However, humans feel both fear of and fascination with AI and its potential. It is important to put AI into context, because hype alone can sway the public into thinking we are headed for Armageddon.
The benefit of AI is that it can solve complex problems efficiently by drawing on large, evolving data sources. It can free up human time and effort for other investments and innovation. AI is finding strong use cases in fields such as content creation, coding, autonomous driving, writing, and personalized learning. The risks, however, range from algorithmic bias and job loss to AI becoming an existential threat.
An honest concern is that AI training datasets encode biases. For example, women across social media spoke out and critiqued the app Lensa for outputting edited images of a more sexualized version of themselves and for whitewashing people of color. The claim was that these AI algorithms perpetuated damaging stereotypes and did not produce accurate, meaningful outputs. However, the problem often lies with the quality of the data being fed into these systems: algorithmic outputs are only as good as the data put in. Depending on who designs the systems and selects the data, the results will reflect certain perspectives while limiting, and potentially harming, others. To support a more just world, it becomes imperative to include diverse points of view in the design and training of AI.
Meanwhile, other concerned citizens have spoken out about privacy and intellectual property rights, as publicly available content on the internet, including material with unresolved questions around copyright, trademarks, consent and compensation, is being used to train unsupervised AI models. There is also an ongoing conversation about whether AI generator tools are anti-creator and could displace the people working in that market.
Sasha Stiles, a poet and artist nominated for the Pushcart Prize, Best of the Net, and the Forward Prize, is also a lifelong AI researcher. Her work explores the intersection of text and technology, experimenting with language models (LMs) like GPT-2 and fine-tuned text generators. Her book Technelegy was written in collaboration with AI, using her own poetry as the foundational training data.
"I'm much less interested in the idea of being replaced by AI than in the prospect of being augmented- having our human capabilities expanded and elevated via intelligent systems," she said. To Stiles, there is a false binary between technology and humans. From fire, the wheel, printing press to the internet, human civilization is a direct product of advancing technologies. As a female poet, her work is doubly important as she strives to bring her perspective and encourage her peers who are currently underrepresented into the making of these systems.
Don’t panic. Applications for generative AI models are still relatively limited. However, fascinating use cases have been proposed for LMs, including ChatGPT. OpenAI’s new bot can write college-level essays, debug code, tell funny jokes, play the role of a translation manager and even compose apologetic sonnets that sound like William Shakespeare. ChatGPT works best for low-stakes tasks, since it still fails in sensitivity and specificity.
Princeton University scientists Arvind Narayanan and Sayash Kapoor wrote an article on Substack describing a foundation for defining tasks that ChatGPT and other LMs can do, outlining a few types of tasks for which these tools are an ideal fit.
The downside is that the bot sometimes sounds so convincing that people can mistake its output for the truth. ChatGPT's reliability is still mediocre, as OpenAI's own disclaimer warns every user who opens a trial account. Critics of AI argue that the downstream effects of an authoritative-sounding bot producing inaccurate outputs could push our current epistemology crisis even further into a black hole.
According to a paper by University of Washington researchers, several planned approaches must be in place to mitigate the risks of AI language models. Researchers and AI trainers should carefully document their datasets, evaluate and audit them regularly, and hold themselves accountable for model behaviors that risk causing harm. Undocumented data fed into an AI learning system runs the risk of perpetuating harm without recourse or appeal.
Moving forward ethically with this technology calls for organizations to set standards of accountability and transparency. Governance around AI should scrutinize data collection practices and audit model behaviors and designs that can produce inaccurate or harmful results. Understanding how AI models are built and engaging in productive, collaborative conversations will be essential.
So, BFFs — are you up for the challenge?
[Editor's note: CORRECTION: The original article claimed Amanda Hyslop interviewed University of Washington professor Emily M. Bender directly, when in fact she summarized learnings from Bender's published research (linked above), which had multiple authors.]
Amanda Hyslop (aka MizzuzB) is a writer in Web3 exploring the intersections of culture, finance and technology. She is an avid NFT collector and advocate for women building on the blockchain.
This article and all the information in it does not constitute financial advice. If you don’t want to invest money or time in Web3, you don’t have to. As always: Do your own research.