Exploratory Visual Digital Character and Visual Digital Scene Design Using Artmaking Generative AI: Enhancing Story Problems and Other Pedagogical Narratives


Abstract

Static visual illustrations featuring characters and scenes play an important role in story problems and other pedagogical narratives. Such visuals may engage learners, connect them with the learning sequence, set the emotional tone, evoke settings, emphasize critical moments, and support teaching and learning in other ways. With the popularization of artmaking generative AI, practical questions arise: How well can this tool create visual characters, both animate and inanimate, from character design prompts? What about drawing the scenes in which such characters may be placed, alone or in relation to one another? How difficult is it to prompt the generative AI to create consistent characters from different angles and perspectives, or consistent scenes and backgrounds? This work explores how practically usable AI-generated visuals are for the making of characters and scenes, with some light pre-production and post-production as needed.

Introduction

In recent decades, higher education has shifted from "text alone" to "text with visuals and multimedia" to harness learning benefits. Visuals capture and hold learner attention. They enhance memorability. They add multi-dimensionality to examples and explanations; in terms of dimensionality, think lines, shapes, textures, colors, and other aspects. Visuals help make the learning, whether concepts, categories, processes, relationships, interrelationships, systems, or other elements, more approachable. They are used to explain data relationships. They provide insights into the natural and physical world, both what is seeable with the naked eye and what is visible only with technical augmentation, at micro-, meso-, and macro-scales. Core visuals used in story problems and other forms of narratives for learning may include a range of types, but one type comprises characters (animate: humanoid, animal-like; inanimate: robotic, plant, and others) in contexts (scenes). Much of the world is designed and communicated for human consumption. While many learning visuals are photographs, many others are illustrations, re-enactments, and depictions from the human imaginary. In the current age, those latter visuals will increasingly emerge from the human-computer imaginary.

This work explores the usability of the Deep Dream Generator (originated in 2015, in v. 3 as of 2024), an artmaking generative AI tool (A-GAI), in creating characters (typically in the foreground) and then the scene context (typically in the background). A-GAI tools may be designed in different ways, but many are built on deep learning models (various neural network architectures) trained on large imagesets. The learning is largely unsupervised, and from the learned weights, the technology can be induced to output visuals based on text and/or image prompts, tags, and other inputs.
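To make the prompt-to-image workflow concrete, the following minimal sketch uses the open-source Hugging Face diffusers library with a publicly available Stable Diffusion model. This is an illustration only; the Deep Dream Generator itself is a web-based tool with its own interface, and the model name, prompt wording, output filename, and seed below are assumptions chosen purely for demonstration.

import torch
from diffusers import StableDiffusionPipeline

# Hypothetical character-design prompt; the wording and the model choice
# are illustrative and are not drawn from the chapter itself.
prompt = ("a friendly cartoon robot librarian, full-body view, "
          "flat colors, simple background, children's textbook style")

# Load a publicly available text-to-image diffusion model.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = pipe.to(device)

# Fixing the random seed helps re-create a similar character across runs,
# analogous to the consistency concerns raised in the research questions below.
generator = torch.Generator(device=device).manual_seed(42)
image = pipe(prompt, generator=generator, num_inference_steps=30).images[0]
image.save("robot_librarian_character.png")

In practice, character consistency across images is the harder problem; fixing the seed and reusing detailed prompt phrasing helps only partially, which is part of what the research questions below probe.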

The research questions include the following:

  • RQ1: How usable is the Deep Dream Generator in creating static characters (animate and inanimate) for pedagogical purposes?

  • RQ2: How usable is the Deep Dream Generator in re-creating prior static characters (animate and inanimate) for pedagogical purposes?

  • RQ3: How usable is the Deep Dream Generator in creating static scenes (backgrounds) for pedagogical purposes?

  • RQ4: How usable is the Deep Dream Generator in re-creating prior static scenes (prior backgrounds) for pedagogical purposes?

Optimally, the visuals produced by the artmaking generative AI (A-GAI) would be of sufficient quality for use in learning materials. The tool should be flexible enough not only to create inspired characters and scenes but also to re-create needed characters and scenes, enabling image sequences and reproductions across different scenarios. The characters and scenes here are static ones, but animations may be created by setting up "joints" and "skeletons" in the characters and then applying "walk cycles," dance sequences, and other scripted motions. Motion overlays may be applied to the respective scenes using particle effects, weather overlays, and others. Specific motion programming may be applied to both the digital characters and the scenes using various commercially available digital puppet tools, digital image editing tools, animation tools, video tools, game engines, immersive virtual world tools, and others. Similarly, voice and soundtracks may be added to the digital puppets, animations, and videos. As a side note, given that creations by generative AIs cannot currently be copyrighted in the United States, such generated visuals are usable without payment in academic and other (even commercial) contexts.


Review of the Literature

The academic literature includes research on the making of various synthetic digital characters. In some ways, the simplest are the static visual ones. The main modality of communication for a static visual digital character is its appearance, that is, how the character is visually styled. The “silhouette, shape, proportions and pose” are some central elements of visual character design as informed by personality (Fogelström, 2013, p. 4). Other visual concepts at play include the openness and closure of shapes, the context or gestalt, color theory, cultural practice, and others. The visual representation of a virtual character stands in for perhaps deeper knowledge of the character. The physical attractiveness of a character may be appealing for a time, but looks are a “fading trait” (Nieminen, 2017, p. 6).

Key Terms in this Chapter

Scene Design: The communication of a location and context (and even atmospherics) through visual means.

Character Design: The communication of character identity through visual and other means.

Artmaking Generative AI: A technology that uses deep learning trained on large datasets of imagery and that can then output visuals based on text and/or image prompts, singly or in combination.
