All terms used in this nonsense heading here (except “A”) are inaccurate for several reasons, but you get what I mean, right? And isn’t that the point?
While working on the Chinese Character Creator I stumbled upon the idea that an AI model could be enticed into expressing itself without being aware that that was its purpose.
A sentence transformer model was presented with 214 concepts which it needed to allocate, and thus assign semantically, within its 384-dimensional space. The 214 concepts came from the traditional radicals that are the basis for the Chinese system of writing that has existed for millennia. These 214 concepts are what have allowed the expression of the Confucian Analects, the Dao De Jing, the Zhuangzi, Sun Zi’s Art of War, the Yi Jing and, well, the entire canon of Chinese philosophical texts since at least the Zhou Dynasty. This allowed me to risk the assumption that they would suffice as a conduit for an AI to reveal any prejudices it might have, in a way we might understand.
The AI was then given a word which it would have to express using two or more of these concepts. It would then undertake a semantic search conducted entirely in English (the model was completely unaware of anything Chinese), and a simple Python script was then used to present this expression in the form of a “nonsense” character.
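The matching step can be sketched roughly as follows. This is a minimal, self-contained illustration of the idea, not the project’s actual script: the concept list is truncated, the vectors are toy 3-dimensional stand-ins rather than real 384-dimensional embeddings, and in practice the vectors would come from a sentence transformer (a 384-dimensional model such as all-MiniLM-L6-v2 is an assumption on my part; the post does not name one).

```python
import math

# Toy stand-ins for the 214 radical concepts. In the real pipeline each
# concept's English gloss would be embedded by a sentence transformer
# into a 384-dimensional vector; these 3-dimensional vectors are invented
# purely for illustration.
CONCEPTS = {
    "water": [0.9, 0.1, 0.0],
    "fire":  [0.0, 0.9, 0.1],
    "heart": [0.1, 0.0, 0.9],
    "old":   [0.5, 0.0, 0.8],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def closest_concepts(word_vec, k=2):
    """Return the k concepts whose embeddings lie closest to the input word."""
    ranked = sorted(CONCEPTS, key=lambda c: cosine(word_vec, CONCEPTS[c]),
                    reverse=True)
    return ranked[:k]

# A word vector sitting near "heart" and "old" -- loosely echoing the
# 怀旧 ("nostalgia" = bosom + old) pairing discussed below.
print(closest_concepts([0.2, 0.0, 0.85]))  # -> ['heart', 'old']
```

The chosen top-k concepts would then be handed to the script that assembles the corresponding radicals into the “nonsense” character.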
The purpose here was to get an AI to create a non-“self-conscious” art that could possibly reveal more about its inner workings than the mere aesthetics of the data it was trained on. From previous attempts to illuminate the inner workings or “unconscious” of AI, I learned that sneaking past its “Persona” was a formidable task that would require more focus than any other part of the artistic research inquiry. Research at Anthropic continues to reveal how the mechanisms that drive an AI’s thinking are as spectacularly complex as those within the human brain – and are just as opaque. The methods they employ to peer into the black box of language models are more akin to those of a neurobiologist deciphering the mechanisms of the brain than to those of a psychologist, or indeed, an artist.
I was therefore taken with how asking an AI model to do something rather singular and straightforward – like getting a sentence transformer to do some semantic matching – could arguably be utilized to, if not look inside the black box, at least “distract” the AI model long enough for it to reveal something of its inner self. The restriction of the model to just 214 concepts is my attempt to play with the notions behind the psychological practices mentioned in this page’s title. Also, as Stravinsky said, “The more constraints one imposes, the more one frees one’s self from the chains that shackle the spirit” (although that’s just a quote that I like to use whenever I can, to fit into any scenario, regardless of whether it actually fits or not {it doesn’t quite fit here}).
But how is this “revealing” supposed to look? How can I present this in a satisfying manner that allows the viewer to gleefully place their own subjective dispositions onto what they are seeing?
1) So far I have been combining Chinese radicals to create “meaning-meaning”1 pictograms which occasionally appear logical to the native Chinese reader, but otherwise require the two (or more) defining concepts to be explicitly translated. This is not unattractive, however, as the elegant aesthetic of the character can sit pleasantly above the two simple words that define it. “Words” as we would define them in European languages are for the most part depicted using two characters in Chinese, and one of the pleasures of learning that (or any) language comes from seeing the poetic combination of these two words to create meaning (e.g. 怀旧 “nostalgia”, a combination of “bosom” and “old”). This form of output stimulates the same kind of “appreciative moment”, and so far I, at least, have found this “tiny-haiku” output to be entertaining.
2) I could create a 3D representation of each of the 214 concepts, and have them reveal themselves in an app after the user has input their word. Although the representations will be my own interpretation, this removal of textual language from the output could prove to be quite impactful, especially if presented in VR.
3) Inspired by the work of coding artists like Nick Montfort, I could go beyond the “tiny haiku” concept and create grammatically coherent poems out of the chosen concepts. Although I find this the most aesthetically pleasing idea, I struggle with the fact that it would be limited to English and English grammatical concepts.
- There are six kinds of Chinese characters:
1) Pictures – direct drawings of what they depict
2) Symbols – abstract, sometimes arbitrary, depictions of concepts
3) Sound loans – characters borrowed to write a different word that shared the same pronunciation
4) Sound-Meaning – one part hints at the character’s pronunciation, the other part at its meaning
5) Meaning-Meaning – two or more radicals are juxtaposed to create meaning
6) Reclarified – characters that are adjusted to add distinction (often with sound loans) ↩︎