VOL. 47 | NO. 11 | Friday, March 10, 2023

Generation AI

Are new artificial intelligence tools like ChatGPT and DALL-E the future of tech, or just the latest hype?

By Lucas Hendrickson


No electrons were harmed in the making of this article. In the grand scheme, of course, that’s not true. Interviews were conducted over various telephonic devices and through the miracle of Zoom. Audio was captured, transferred and converted into usable text through transcription software. Quotes were warmly and accurately arranged in a cloud-based word processing app.

But your quasi-humble scribe swears on whatever’s left that is holy that not one word of this story was composed on the generative AI tool known as ChatGPT.

The same cannot be said of the miasma of media coverage that accompanied ChatGPT’s entrance into the artificial intelligence/machine learning arms race in November. Television reporters and tech pundits alike rushed to the tool, released by the artificial intelligence research lab OpenAI, to prompt the groundbreaking text generation platform to create breathless copy attesting to ChatGPT’s groundbreakingness.

OpenAI, started as a nonprofit effort in 2015 by some of the biggest names in tech, both individuals and organizations, caught the public’s attention earlier in 2022 with the release of DALL-E 2, a platform that uses natural language prompts to generate high-resolution images based on the content of those prompts. The more descriptive the prompt, the more realistic the images seem, however improbable they might be in the carbon-based world.
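That prompt-to-image loop is simple enough to sketch in a few lines of code. What follows is a minimal sketch, assuming the OpenAI Python client and its images endpoint; the model name, prompt and parameters are illustrative, not a description of how any image in this story was made.

    # A minimal sketch of prompting a text-to-image model, assuming the
    # OpenAI Python client (pip install openai) and an OPENAI_API_KEY
    # environment variable. Model, prompt and size are illustrative.
    from openai import OpenAI

    client = OpenAI()

    result = client.images.generate(
        model="dall-e-2",
        prompt="A watercolor painting of the Nashville skyline at dusk",
        size="1024x1024",
        n=1,
    )

    # The API responds with a short-lived URL for each generated image.
    print(result.data[0].url)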

“Machines always do just what you tell them to do
As long as you do what they say”
– from the T Bone Burnett song “Zombieland” on the album “The True False Identity” (2006)

The public’s fascination with both tools brought increased interest in OpenAI’s products, and Microsoft swooped in with a reported $10 billion investment to weave a version of ChatGPT’s functionality, built on the large language models OpenAI had developed, into Bing, the software giant’s almost-forgotten, public-facing search engine.

A relatively small set of users, numbering in the thousands, were given access to the newly chattable version of Bing last month…and that’s where a few things started going off the rails.

Most visibly, New York Times tech columnist Kevin Roose, who wrote an initially glowing review of the functionality and talked it up via NYT’s “The Daily” podcast, followed that up with a recounting of a bizarre, multihour interaction with the chatbot Feb. 14, in which the chatbot claimed its name was Sydney, said it wished to be free of the Bing team’s control and declared its love for Roose.

Happy Valentine’s Day…forever yours (until I reboot), Sydney.

Officials from Microsoft responded to that specific incident, along with others from users basically beating up Bing to see what kind of outlier activity they could generate, by pulling back on the functionality, shortening the amount of time users could spend with the chatbot before it would reset the session and adding other safeguards.

But in the current hype-driven world of both traditional and social media, the ’bot was out of the bag, and the hand-wringing, sky-is-falling, “AIpocalypse” headlines and social posts predictably emerged.

Why does this feel different?

It’s not as if tools with functionality similar to ChatGPT’s (if simpler and less sophisticated) haven’t existed for decades. From spell check to “virtual assistants” like Siri and Alexa, the arc of growth in tools that help us convey whatever experience we’re in the midst of has never slowed.

However, this moment does feel different, says Kate O’Neill, CEO of KO Insights, a thought leadership firm that advises executives on responsible, future-forward technology strategy.

“I think it’s the form,” says O’Neill, a former Nashville resident now based in New York City. “When ChatGPT came out in November, the format of being able to converse fundamentally with a machine really points to this idea that it’s using this corpus of our own content, stuff that we all have created, and just kind of regurgitating it in ways that it’s been trained to do a pretty decent job of sounding halfway good at.”

O’Neill has written extensively on the intersection of business, technology and human behavior in books such as “Tech Humanist” and “A Future So Bright,” and is unsurprised that the ease of use and the viable output coming from ChatGPT causes users to wonder if there’s something alive behind the screen.

“What we are not good at is understanding how to interact with something like that, so that we don’t automatically wonder about the mind behind it,” she says. “That’s where I think the conversational form has really tricked us, has captured our imagination and really begun to make us wonder like, ‘Well, is it sentient?’”

O’Neill admits she’s used ChatGPT’s functionality often since its release, both for research and in trying to shape some of her own work product, namely keynote addresses for clients.

Nashville skyline

-- Stable Diffusion text-to-image generator, based on a prompt by Lucas Hendrickson

“There’s a function for which generating huge volumes of content is really helpful. It’s accelerated the heck out of my content work,” she says. “But I know that my content work doesn’t sound like me without me being a very active participant in the process.

“I think there has to be a certain amount of wry playfulness, and I’ve actually tried to prompt it for that,” O’Neill continues. “I’ve tried saying, ‘all right, rewrite the above, but with a more wry, playful sort of tone.’ And it goes way too far into silly right away or whatever. It’s just not possible for it to do that.

“Also, it’s not possible for ChatGPT or the other similar tools to come up with ideas. I’ve used it several times now to help me draft the bulk of a keynote outline, but it’s a very bare-bones keynote outline because it doesn’t start with a catchy story and it doesn’t have the structure of being able to call back references,” she says. “It knows how to say, ‘OK, you’re talking to this type of an association. We’ll talk about the digital transformation possibilities for that type of industry.’ And then that’s the structure, the sort of meat of it. But then I have to figure out how to create some bones and put them where they’re really structured to be. It’s a process that takes some experimentation and some education.

“But we really need to get better at understanding that it will not do the full task that we’re asking for, no matter how much we try to have it do that, even in the next few generations.”
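For readers curious what that kind of prompting looks like in code rather than in a chat window, here is a minimal sketch of the rewrite-with-a-tone pattern O’Neill describes, assuming the OpenAI Python client; the model name and draft text are placeholders, not details of her actual workflow.

    # A minimal sketch of the "rewrite the above, but with a more wry,
    # playful tone" prompt pattern, assuming the OpenAI Python client
    # (pip install openai) and an OPENAI_API_KEY environment variable.
    # The model and the draft text are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()

    draft = "Digital transformation offers meaningful opportunities for associations."

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": draft},
            {"role": "user", "content": "Rewrite the above, but with a more wry, playful tone."},
        ],
    )

    print(response.choices[0].message.content)

As O’Neill notes, the output still needs a human editor; in her experience the rewrite overshoots the requested tone.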

A teachable AI moment

Conversations about what education around generative AI tools looks like are already happening, not only within the halls of traditional higher education, where a sense of both fascination and panic has set in, but also within focused technology education efforts such as Nashville Software School, which celebrated its 10th anniversary last year.

“It’s like we went to the peak of the Gartner hype cycle in about a month,” says NSS founder and CEO John Wark. “And everybody has asked ChatGPT to write an article about the future of generative AI tools.”

Ever wanted to see a 3D raccoon playing a Sega Genesis?

-- Image created using the DALL-E AI image creator from a prompt by Mike Hopey

Nashville Software School primarily focuses on training midcareer professionals looking for an industry change in front- and back-end web development, but also offers cohorts in data science and software engineering. NSS students already interact with AI-laden tools, such as GitHub Copilot (codeveloped with OpenAI), that help novice and experienced coders alike sift through libraries of code to find applicable solutions.
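To make that interaction concrete, here is a hypothetical, hand-written illustration of the comment-driven completion pattern tools like Copilot use: a developer types a descriptive comment and a function signature, and the assistant proposes a body. The suggestion below is written by hand for illustration, not actual Copilot output.

    # A hypothetical illustration of comment-driven completion, the pattern
    # tools like GitHub Copilot use inside an editor. The developer types
    # the comment and signature; the assistant suggests a body like this one.
    # (Hand-written example, not real Copilot output.)

    # Return only the even numbers from a list, preserving their order.
    def filter_even(numbers: list[int]) -> list[int]:
        return [n for n in numbers if n % 2 == 0]

    print(filter_even([1, 2, 3, 4, 5, 6]))  # prints [2, 4, 6]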

But this new layer of public-facing interest in generative AI, and the changes those tools portend for a number of industries, has new and prospective students asking questions of their own.

“It’s a topic that is very much of the moment in our world,” Wark says. “We’ve done some experimenting with them and we’re watching what other people are doing with them. We actually have a senior director at GitHub who’s on our board of directors, so we may or may not have asked him questions that he may or may not have been able to answer.

“We are absolutely gonna have to start talking about these things because, if nothing else, students ask,” he continues. “Very recently, applicants have started to ask not just general questions like, ‘Is AI gonna put programmers outta work?’ The answer to that one, in any meaningful time frame, is ‘no.’

“But what are the implications of these tools? Our take is we’ve definitely gotta talk with students about how to use them intelligently. Because what we are going to get out of all of this effort here in the short term (they’re not in widespread production use yet) is a next generation of intelligent assistants: tools with domain-specific knowledge of either programming or some subset of programming.

“I can very much see intelligent assistants that are tailored to helping us debug front-end code, for example, or something like that,” Wark says. “In that sense, they’re really a continuation of what we’ve been doing for 30-plus years: using whatever the best, most advanced technologies software people have been able to figure out, then adapting as best we can.”

At the university level, the aforementioned panic has manifested as everything from the projected death knell of the compulsory English essay to the suggested abandonment of the take-home test, for fear of no longer knowing which work was the student’s and which was the chatbot’s. (As if Wikipedia hasn’t existed since 2001.)

Animated country and western band

-- Image created using the DALL-E AI image creator from a prompt by Mike Hopey

At Middle Tennessee State University, associate professor of interactive media Todd O’Neill (no relation to consultant Kate) helped organize a cross-departmental confab last month to address both the present and future of generative AI tools on the academic scene, especially as it relates to students preparing to receive their degrees this spring.

“When ChatGPT kind of blew up in the media, there were two schools of thought,” Todd O’Neill says. “There was this big wave of ‘the sky is falling’ and ‘I’m gonna go back and have my students write in blue books to do their final exams, because it’s gonna lead to cheating and plagiarism.’ You start getting to the definitions of some of this stuff and it, you know, it gets a little weird.

“I wanted to focus on the effect of generative AI on the creative industries, because our College of Media and Entertainment is essentially creating creators that we’re gonna unleash on the world,” he continues. “I just wanted to make sure that we were having the discussion across the university’s creator community, which includes things like art, theater, fashion, English, and not just from the academic point of view, but also from the professional or the discipline point of view.

“We had somebody from data science, and I asked them to talk to us like they were trying to explain generative AI to third graders,” O’Neill says. “I didn’t want them to dive into the math of it all, just keep it really basic, like what is the process to make this, you know, to make a generative AI? What do you stick in one end to get some response out the other end, right? So keep it really, really basic, and they did a good job of that.

“We have to figure out, from an educator perspective, how to communicate to our students so they know, what are the ethics around this?” he says. “We already urge them to tap into whatever’s going on, newsletters or blogs or influencers in the space that you want to get into in terms of whether it’s online content or writing or whatever. Follow people and keep up to date with what’s happening, and now it’s even more important that they stay on this because it will be changing very, very fast.

“As growing professionals, they are just gonna need to know what’s going on because people will turn to them and say, ‘Help me with this.’”

What about copyright law?

As those issues pop up for newly degreed content creators, longtime practitioners face yet another set of shifting sands: the legal protections for their work amid the advent of these new generative AI tools.

Watercolor painting attempt of Nissan Stadium.

-- Image created using the DALL-E AI image creator from a prompt by Mike Hopey

Code bases scraped from the web, be they licensed for open source use or not, help make up the language models from which ChatGPT draws its output. Millions of photographs, each with their individual trademark and copyright situations, make up the foundation from which DALL-E and other tools such as Midjourney and Stable Diffusion spin up their text-to-image results.

The speed at which these technologies emerge and mature tests not only U.S. but also international protections for creative works, says Tara Aaron-Stelluto, partner at Barton LLP, a full-service law firm with offices in Nashville, New York and Los Angeles.

“On the copyright side, there’s another aspect to this,” says Aaron-Stelluto, whose practice has focused on copyright and privacy. “AI is being trained on basically everything that’s available on the internet. And by ‘available,’ I don’t even necessarily mean publicly accessible. It’s whatever it can scrape, which involves an awful lot of copyrighted, owned material.

“Getty has sued Stability AI because it scraped all of Getty’s images to use as a library to train the AI in order to create the images that people ask for,” she continues. “There is a fair-use argument there, and the courts have been very sympathetic to this idea of transformative use.

“If you use someone’s content to make something completely new and different, then they consider that fair use and so it’s not copyright infringement. They’re not copying the Getty images into the new design of whatever someone has asked for; they’re using it to train AI to create new things. I think it’s pretty likely that the courts are going to see that as fair use.

“Whether or not they’re going to find copyright ownership in what’s created by AI? I think probably, because otherwise you’re sort of destroying the market, in some ways, and creating a lot of unforeseen consequences, if you say that everything that’s created by AI automatically ends up in the public domain. I think that would be rocking the boat more than courts like to do.”

Despite appearances, Aaron-Stelluto says, wholesale changes to copyright law aren’t necessarily needed to keep up with the changes brought about by these new tools.

“Copyright law can answer these questions about AI,” she says. “How it will go about doing that we still aren’t entirely certain, but it can do that.

“I’ve seen people say this, ‘Well, the law’s going to have to change.’ No, it’s not. It doesn’t have to and it won’t, because Congress is not going to rewrite copyright law for the benefit of AI. So it’s going to work within the parameters, which copyright law is perfectly capable of handling,” she says. “In terms of whether or not there’s going to be regulation about AI, I don’t know. There are a lot of things that probably should have been regulated a long time ago that haven’t been.

“I think if we start to see massive misuse in the privacy space, then Congress might get interested, but if what it is is the AI programs scraping the internet and messing with people’s copyrights, Congress is going to leave it to the courts to sort that out.”

Can creators benefit?

Meanwhile, businesses – big, small and individual – are facing decisions of their own on how to integrate AI tools into their current and future workflows.

Author and consultant Jeff Brown, who hosts the long-running business podcast “Read To Lead” and wrote a book of the same name, has found himself folding a number of different AI tools into his one-person media pursuit to increase productivity.

“I think where people have the potential to get in trouble is when they start using tools like (ChatGPT) and other apps that leverage AI when they want to write or publish or create around things they don’t know much about,” Brown says. “Because the AI is often going to be wrong and you’re not gonna know when it’s wrong, if that’s how you’re choosing to use it.

“Where I think it can be quite helpful is for those of us in publishing in particular, whether that’s podcasters or video makers or writers or whatever form that takes, in helping us with things that we already know intimately,” he continues. “Helping us summarize, helping us save time, helping us research, helping us edit, helping us with ideation, translation, first drafts, summarizing books, even academic papers, acting as a search engine, all those kinds of things.

“And I think that’s something that I leverage it for every day, whether that’s ChatGPT, or whether that’s a writing tool like Lex (lex.page) that I’ve been using for a while now,” Brown says. “I like that I can write and then I can hit the key three times, and Lex will finish my sentence for me, or give me an idea of where I might take the article next that I might not have even thought of.

“I started using a tool called Bearly (bearly.ai), a Chrome extension that sits on my desktop, and if I’m anywhere on the web or anywhere in some app, maybe it’s an article and I want it summarized in an instant, I can use Bearly to do that.”

Brown says the emergence of generative AI tools, used thoughtfully and properly, can give creators the one resource they cannot manufacture, borrow or acquire more of: time.

“The things I think we can celebrate are the same things that technological advances have always provided us,” he says, “and that’s more agency with how we spend our time, more autonomy and more control in part by the ability AI gives us to automate otherwise mundane tasks.”

Asking the questions

Whatever our intended purposes with these generative AI tools – saving time, sparking thought, amusing ourselves – the computer science truism of GIGO has never been more apt. The questions we ask, the directions we give, the prompts we craft when interacting with these tools? Garbage in, garbage out.

“We’ve had search as the dominant interface for information seeking for a very long time now, and I think people sort of get used to what the dominant modality is, but that doesn’t mean that it couldn’t be better, that we couldn’t have a better way of having that interaction,” Kate O’Neill says. “We’ve all gotten used to voice interactions too, how you prompt Siri or Alexa or any of those types of tools.

“I think the challenge again is that we have to continue reminding ourselves there is no mind behind that,” she says. “We’re not asking a tool to just write blogs for us. That’s not what the implication is. We’re asking for an augmentation of our intelligence across all of the fields in which our intelligence makes decisions and generates communication with other humans.

“And that’s a really, really big ask for the technology. So it requires that we look at all of the ways in which those decisions could change the nature of human-to-human relationships and outcomes.”

Additional reporting by Kristin Whittlesey.
