Free Online FOOD for MIND & HUNGER - DO GOOD 😊 PURIFY MIND.To live like free birds 🐦 🦢 🦅 grow fruits 🍍 🍊 🥑 🥭 🍇 🍌 🍎 🍉 🍒 🍑 🥝 vegetables 🥦 🥕 🥗 🥬 🥔 🍆 🥜 🎃 🫑 🍅🍜 🧅 🍄 🍝 🥗 🥒 🌽 🍏 🫑 🌳 🍓 🍊 🥥 🌵 🍈 🌰 🇧🇧 🫐 🍅 🍐 🫒Plants 🌱in pots 🪴 along with Meditative Mindful Swimming 🏊‍♂️ to Attain NIBBĀNA the Eternal Bliss.
Kushinara NIBBĀNA Bhumi Pagoda White Home, Puniya Bhumi Bengaluru, Prabuddha Bharat International.
11/14/14
1326 LESSON 151114 SATURDAY FREE ONLINE E-Nālanda Research and Practice UNIVERSITY run by http://sarvajan.ambedkar.org. The Pali Canon: Experimental Journey into Tipitaka Memorization and Mnemonics; Omnidirectional 3D Visualization for Analysis of a Large-scale Corpus: The Tripitaka Koreana. Please render a correct translation of this Google translation in your mother tongue and the other languages you know. Practice and share to become a Sotāpanna, i.e., a stream-enterer, and be happy to attain Eternal Bliss as the Final Goal.
Filed under: General
Posted by: site admin @ 7:20 pm





http://www.buddhanet.net/e-learning/history/s_theracanon.htm

Buddhist Studies

The Pali Canon 

The Pali Canon is the complete scripture collection of the Theravada school. As such, it is the only set of scriptures preserved in the language of its composition. It is called the Tipitaka or “Three Baskets” because it includes the Vinaya Pitaka or “Basket of Discipline,” the Sutta Pitaka or “Basket of Discourses,” and the Abhidhamma Pitaka or “Basket of Higher Teachings.”



Chart of Tipitaka http://dhammadharo.wordpress.com/


Memorizing the Tipitaka

Experimental Journey into Tipitaka Memorization and Mnemonics



Tipitakadhara Examination in Burma

Yesterday I found this very interesting website, explaining details about the Tipitakadhara examination in Burma:

http://atbu.org/node/26

Among a list of the curriculum for memorizing the Tipitaka by heart, it says:

An argument may arise that nowadays, with the Buddha’s
words already inscribed on palm-leaf, folding book, stone slab, ink
print, books and even in CDs, the bearing by heart of his words is
unnecessary. The physical inscription and the mental impression at heart are not the same.
The former is useful only in the presence of the user, for it might vanish
anytime. There is no benefit whatsoever when the physical inscriptions
cannot be obtained at will.

However, impressions retained at heart of the Buddha’s words benefit one whenever they are recollected, being helpful at any time. Such a person is able to walk straight on the path of Dhamma, while being helpful to his surroundings. Output equals the input concerning the learning process to bring up Tipitakadhara.

When learning by heart the Pāli Texts, one personally “meets” with their possessor, the Buddha,
bringing into oneself the infinite Attributes of him. In oneself the
adoration for and conviction in the Buddha grow overwhelmingly, leading
to the missionary inclination by way of prolonging the Buddhist spirit
and teaching the Path. Insight grows in one while learning the Pali
Texts in conjunction with the elaborative Commentaries and
Sub-commentaries. Brimful with the adoration for and appreciation of the
Buddha’s attributes, wisdom and perfections, one is never bound to
deviate from his teachings. With the consciousness at heart about the
benefit to oneself and fellow beings the victorious earner of
“Tipitakadhara, Tipitakakovida” will always be beautifying the world,
carrying the Banner of Victory in Dhamma.

Now, this is a very very tough examination – the attempt to memorize the whole Tipitaka by heart. As you can imagine, there are not many who are able to succeed:

In the long (59-year) story of the Tipitakadhara Examination the candidates enlisted numbered 7103, the actual participants 5474, partially passed 1662, but only 11 have been awarded the Tipitakadhara Tipitakakovida title. Among those outstanding theras 4 have passed away. The departed might now have attained the supreme bliss, the Deathless, or being reborn in the celestial abodes, they might be discoursing on Buddha Dhamma there. The remaining 3 Tipitakadharas probably will pass the written, interpretative examinations in the near future and obtain the Tipitakakovida, thus ending their long and arduous journey of sitting for these examinations. Again, some of the remaining candidates are indeed bearers of one main division of the Pitakas… One-Pitaka-passed candidates now number up to 114, two-Pitaka-passed 13 and 2½-Pitaka-passed 5.

These numbers are still amazing.

Just last week I had an interesting discussion with a young monk from Sri Lanka who explained the drawback of recording Dhamma talks. He said his teacher explained that they had seen people being less attentive and concentrated when Dhamma talks were recorded – it seemed to be almost an excuse to postpone the training. From there we went to books, thinking that a book is something like that, keeping you from memorizing the instructions of the Buddha. “Maybe that is why”, reflected the young monk, “the Buddha had his monks memorize his teachings – so that they would practice them more immediately and directly”.

 



Width vs Depth

Just came across the following passage about the Burmese
Tipitakadhara examination which tests monks for their memorization
abilities:

Thus,
the Tipitakadhara Examination is one of the longest and toughest
examinations in the world. When the first Tipitakadhara Examination was
held, the Venerable Mingun Sayādaw was one of over one hundred monks
invited to observe the proceedings. When the result was a disappointment
with no candidate successful, he resolved to repay the nation’s debt in
search of a hero of the Pariyatti Sāsana. He set about the task
systematically. He took up the Pāli Canon passage by passage, book by
book. He first set out to understand the passage thinking in Myanmar and
in Pāli. He broke the passage into sentences, paragraphs or sections
according to the degree of difficulty. If necessary, he noted the number
of modifications and variations in the selected pieces. He read aloud
each section five times, then closing the book, he repeated what he had
just recited. If he was hesitant or felt he had not mastered the passage
he would open the book and read aloud five more times. If it was
recalled smoothly he would recite it ten times and then pass on to the
next passage. In the evenings when reciting the day’s passages he would
not do it alone but request some other monk to check with the open book.
This ensured that he did not pass over any word, phrase or sentence and
that each declension was correct.

When two or three books had been mastered he would set aside each evening two or three periods required for their recall and recitation. The intention was to go through the finished books simultaneously so that the mind would be active in all the books at the same time and all interrelationships would be discerned.

I wasn’t sure whether I would continue learning each Dhammapada
chapter individually (like I did with Chapter 13), but I eventually
decided against it. Simultaneously cycling through all chapters and
adding 3 verses at a time forces me to go over the entire Dhammapada all
the time and strengthen the “story” structure of each Dhammapada
chapter.

Right now I am in the second cycle of adding 3 more verses (so 6
total) for each Dhammapada chapter (except chapter 13 which I finished
completely).

The only benefit I can see with taking them one chapter at a time is
that you have more “finished” blocks you can look back on. Still, it
might be disheartening to see how many chapters are still in front of
you, while with my current method I have the “feeling” I already “have”
the Dhammapada memorized – just 3 verses deep, and all I have to do is
add “the few verses missing” (psychologically speaking).

It is very interesting how these verses grow on you. I wonder if and
what impact this memorization process had on the Burmese masters.




26 stories in 26 days

Done. Not with the entire Dhammapada, but the first milestone: to
learn 3 verses of each chapter. That is 26 chapters times 3 = 78
Dhammapada verses (in Pali) in 26 days. Not bad, considering the last
time I memorized something of this size was over 15 years ago.

Today too I would like to share some observations about this whole
process (memorizing Pali). As outlined in my earlier posts, I created
stories (strings of associations) which formed around the meaning of the
Dhammapada verses as well as peg words which help me to identify the
exact verse number and order of verses within chapters.

The following are “my” 26 short stories, which evolved through trying to pull the peg words and the meaning of the verses into extraordinary, funny, interesting “tales”:

1.) does not have one…(sorry, I am sooo used to the first 3
verses…this will get a story eventually, when it gets more complicated.)
2.) a cow kneeling, feeding on a pasture and meditating
3.) observing little mummies walking up a mountain with a volcanic lake
4.) a rower in a river of flowers
5.) a foolish thief stuck on a skyscraper
6.) a wise FBI detective caught in an underground prison
7.) a holy cow driving a bus
8.) a poisonous ivy’s fruits being stolen by Mike Tyson
9.) a deadly bee attacking a hot dog
10.) a sprint over living logs floating on the Danube
11.) diving into the dark ocean where a sunken village crumbles
12.) a small forsaken island with survivors fighting against a flower
13.) a river dam who decides to go on a hike after having a deja vu
14.) a pool of tar with the Buddha reaching out to pull people to the shore
15.) a happy dog whose tail looks like a cigar and who was rescued from lots of angry dogs
16.) a newsboy throwing dishes into the driveways of people who look like little nuts
17.) the same angry bee digging a hole which almost makes a coach, pulled by a baby, break
18.) a dove, pale and yellow, which is close to death and decides to make its PhD but dies in a desert in New Mexico
19.) a very very wise talking bath tub
20.) a money seeking dog on the right path
21.) a net of nerves on which gigantic tarantulas chase Sindbad
22.) a nun giving a massage and a monk wearing the robes like a mask
23.) an elephant in a mensa following a naga serpent and walking over giant mint leaves
24.) emperor Nero travelling through time and hugging a creeper with leaves made of jam jars
25.) a bhikkhu sitting on a board of nails (meditating) while someone tries to disturb him with lit matches
26.) a seemingly conceited brahmin who tries to cross over a stream quickly but loses against two friends in a moving truck

This will make it hard for anyone to forget. The necessity of having to find associations between the peg word (dove = 18), the verse number (animal = 235), the topic (Mala = taint) and the verse itself (in this case, being close to death) led to some VERY funny, VERY creative stories.

After about a week I realized that Mr Lorayne was absolutely right in his book when he emphasized again and again that you have to visually SEE your association – even if it is just for a brief moment. More than once I was stuck and not able to repeat a particular line because I had not created any visual image for that particular line. Now I have to clarify: I did NOT create a visual image for each verse line in every verse. Some of the verses went into memory like butter; they stuck right away and were very very easy to recollect, even after a long time, but others I had a really hard time with. For those which were hard, I realized I needed to identify the crucial words, the key words, which would help me pull the remainder of a line or phrase into memory – and then I would create a picture around those key concepts/words in my mind.

This really helped. One of the problems I saw was that when a line
was very abstract I tried to dismiss it and “hope” that the memory of
the visual for the line before or after would do the trick and help me
later recall that particular verse line. Sometimes that actually worked; in many cases it did not. The solution to this problem was easy: I had to do the “word substitution” trick and find a similar-sounding, more visual idea in another language I was familiar with and turn that into the theme for the particular verse. To give you an example: in
Dhammapada verse Mummel (335) the first line has the Pali word “jammi”
which I was not used to and which was hard to recall. So I would turn
that into “jam” and imagine Nero (chapter 24 = Tanha, or Craving) as
part monkey (from the first verse in this chapter) who cuddles (German:
mummel) with a creeper on which jars of jam grow (jammi). As you can
see, this visual is LOADED with meaning. Now it is not so much a problem
of not forgetting this weird story – but rather a question of decoding
it properly!

In most cases I tried to transform or direct my association according
to the general idea behind the verse. The verse ideas therefore were
like the script into which I had to creatively force the peg words
through means of associations and substitute words.

One word regarding the number of repetitions. Overall I maybe repeated each of these 98 verses 5-10 times – not all at once, but spread over the last 3 weeks. The power of these visual images allows for a much more relaxed approach to repetition cycles. In fact, when learning a verse one evening, sometimes a whole day would pass before I was able to repeat the same verse again, and I was always surprised at how little effort it took to recall the correct verse. At that point it adds to your confidence, and the next repetition can take place after an even greater interval. It is clear that if you casually bring these stories to your mind, eventually the whole picture/visual gets clearer (that is something else I found very important to work on: trying to make the visual story more and more crisp through each repetition) and will become part of long-term memory.
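For what it is worth, the expanding review gaps described above can be sketched as a tiny schedule generator. This is only an illustration of the idea; the one-day starting interval and the doubling factor are my assumptions, not figures from these posts.

```python
# A minimal sketch of an expanding-interval repetition schedule:
# each successful recall roughly doubles the wait before the next
# review. The interval and factor values are illustrative assumptions.

def review_days(repetitions, first_interval=1.0, factor=2.0):
    """Return the day offsets on which each review would fall."""
    day, interval, schedule = 0.0, first_interval, []
    for _ in range(repetitions):
        day += interval          # wait out the current interval
        schedule.append(day)     # review on this day
        interval *= factor       # relax: the next gap is longer
    return schedule

print(review_days(5))  # [1.0, 3.0, 7.0, 15.0, 31.0]
```

With a doubling factor, five reviews stretch out over about a month; a smaller factor compresses the schedule toward the 3-week span mentioned above.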

It is a strangely positive feeling to see the mind WANTING to learn
yet another verse after already loading it up with 3 verses within a
short period of time and having almost no difficulty in going up and
down, word by word, over the text thus committed to memory.

Even more important and interesting are of course the effects these verses have on your Dhamma life. I found them to be like a voice of the Buddha talking to me. For instance, the other day I left a place where I had listened to two people talk – in a very unrestrained way – and sometimes something which is meant in a funny way can hurt other people. Driving back home the Bhikkhu Vagga verse shot into my memory, all by itself – “kayena samvaro sadhu, sadhu vacaya samvaro” – and I felt a much deeper appreciation for the meaning and relevance of what these words had to say. It was also interesting to note once again, as mentioned in the last post, how the stream of sankharas started to change my perception of certain things through re-evaluation in the light of the Buddha’s words of wisdom. Here is another one: yo ce gatha satam bhase, anatthapadasamhita – eka gathapadam seyyo, yam sutva upasammati. The word upasammati expresses a beautiful idea: that the ultimate purpose of words should be the end of words – the silence of the mind! What a contrast to the gossipy internet-faring mind
;-)

Enough for today. If someone would like a detailed explanation of how the above 26 short stories encode Dhammapada verses, I am happy to share. Don’t worry if all of this makes absolutely no sense to you
;-)




The singing roach

Today marks the end of nearly 2 weeks of memorization efforts with
the Dhammapada using mnemonic principles to speed up the process of
memorization and to make sure that I would not forget what I learned.

After I had finished learning the names of the chapters of the
Dhammapada by heart and after finishing the first verse of each chapter
last week, I had to think how to continue mapping out the Dhammapada
“mentally”.

I decided to continue with the “top down” approach and started learning two more verses in each chapter. This way, so I reflected, I would be able to work on all chapters in parallel. This would give me the feeling that I was quickly progressing through the entire Dhammapada – rather than feeling “stuck” at the very beginning.

Right at this moment, today, I finished the 2nd and 3rd verse of
chapter 17, Kodha-vagga. That means of chapters 1-17 I now know 3 verses
by heart and of the remaining 9 chapters one verse each.

Over the past few days, as this little self-study took shape, I wondered quite often how this engaged activity of learning Pali verses by heart would affect me and my meditation practice. I was sure it would, but I was wondering how, and whether I’d notice it at all.

Then, one day, I suddenly saw at least one side effect: in the morning I overheard a conversation, and later that day my mind came back to it. While I was thinking about it, a Dhammapada verse manifested itself in my mind – obviously pulled in through the association between the subject of my investigation and the meaning which the verse captured. It was interesting to see how an observation about something very mundane turned into a deeper reflection after my mind connected the experience with the word of the Buddha. It became clear to me that if this happened more often, and quite naturally, the interpretation of certain experiences must change over time. I guess there is nothing mysterious about this observation; in fact we do this every day – however in a more uncontrolled fashion, impacted by greed, hatred and delusion.

To give you a practical example: in a discussion the question came up of how to respond to a person who had gone behind the backs of his team members and broken a promise he had given before. Immediately the following Dhammapada verse sprang to my mind, showing a possible course of action which seemed very enlightened:

Akodhena jine kodham, asadhum sadhuna jine

Jine kaddariyam danena, saccenalikavadinam

Dhp 223

Through non-anger defeat the angry one, through goodness defeat the bad;

Defeat the miser through giving and the liar through speaking the truth.

To show you how the linking of a story continues to work so well in my efforts to learn the Dhammapada by heart based on mnemonic principles, I would like to continue my “picture story” of the Puppha or Flower chapter (No. 4) of the Dhammapada.

You will probably NOT understand why the following makes such an amazing difference if you have never experienced linking associations consciously or never employed a peg system to memorize numbers. Anyhow, here is the puppha vagga story which helps me remember the 3 beginning verses (so far) of the puppha vagga:

The puppha vagga is chapter number four in the Dhammapada. Four in the Major System (which unambiguously maps phonemes to numbers) stands for “Ra” (a peg word with just one consonant sound in it, namely “r”, so it can only refer to one number, and that is 4) – and in my mind I imagine the Egyptian sun god Ra walking through a field of flowers in which only his head is visible.
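The digit-to-consonant table behind this can be sketched in a few lines. Note that this is a letter-based simplification I am adding for illustration: the actual Major System maps sounds (ignoring vowels and w, h, y, and treating digraphs like “sh” as one unit), so the check below is only an approximation.

```python
# Standard Major System digit-to-consonant table (letter-based
# simplification; the real system encodes sounds, not spellings).
DIGIT_TO_LETTERS = {
    '0': 'sz', '1': 'td', '2': 'n', '3': 'm', '4': 'r',
    '5': 'l', '6': 'j', '7': 'kgc', '8': 'fv', '9': 'pb',
}
CONSONANTS = set(''.join(DIGIT_TO_LETTERS.values()))

def encodes(number, word):
    """True if the word's consonant skeleton spells out the number."""
    skeleton = [ch for ch in word.lower() if ch in CONSONANTS]
    return (len(skeleton) == len(number)
            and all(ch in DIGIT_TO_LETTERS[d]
                    for d, ch in zip(number, skeleton)))

assert encodes('4', 'ra')        # chapter 4 peg: the sun god Ra
assert encodes('44', 'rower')    # verse 44 peg: the rower
assert not encodes('44', 'ra')   # "ra" has only one consonant
```

The same check confirms the chapter pegs mentioned later in these posts, such as toes = 10 and nail = 25.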

Now, out of that first picture I imagine that there is a rower  (verse no. 44) in a boat traveling on that ocean of flowers.

Ko imam pathavim vicessati, yamalokanca imam sadevakam

Ko dhammapadam sudesitam, kusalo pupphamiva pacessati.

In my mind my mental “camera” then moves to the “shore” of this “river of flowers” where I see someone standing on the rail (verse no. 45) who listened to what the guy in the boat just said and replies:

Sekho pathavim imam vicessati, yamalokanca imam sadevakam

Sekho dhammapadam sudesitam, kusalo pupphamiva pacessati.

Finally, my “mental camera” moves back to the person in the boat, but now I see a little roach (verse no. 46) standing in front of the boat singing the following lines. (In order not to forget any of the lines of this verse, which is a bit harder for me to remember, I mapped each line of the verse into the story – sometimes I do that, sometimes I don’t. But so far I have found that I HAVE to do this whenever I come across a verse which does not readily stay in memory. It is paramount to not lose the context and flow and to have something to connect my associations to.)

So the roach starts to sing:

Phenupamam kayam imam viditva

“Having seen this body as a lump of foam” – at which point the roach points to a lump of foam in the river.

Maricidhammam abhisambudhano

“Having awoken to the fact that (it) is of the nature of a mirage” –
at which point in the song I see the roach flicker as if it itself is
just a mirage

Chetvana marassa papupphakani

“Having broken Mara’s poisonous arrows” – here I see the roach fighting off arrows, which are shot at him from all sides, with his tiny sword.

Adassanam maccurajassa gaccha

“May walk unseen by the king of death” – and finally the little roach walks over the water and suddenly disappears.

You can understand that if someone were to ask you, “Do you know Dhammapada verse 46?” – it would make you smile ;-) before you even give the answer, which, I am sure, would be correct.

And if, for instance, you ask me what verse 78 in the Dhammapada would be, I first map the number back to a word and arrive at cave, and thus immediately know the association of the cave with a person who is hiding behind paper boards (German: Pappe) and someone telling him not to hide behind the small boards but rather the large ones, which triggers my memory of the following Dhammapada verse:

Na bhaje papake mitte

Na bhaje purisadhame

Bhajetha mitte kalyane

Bhajetha purisuttame

One might think that this is complicated, but in fact it is an amazingly simple trick for remembering, ad hoc, a verse deep in the middle of a book, while at the same time being able to retrieve the verse number through an unambiguous visual/mental encoding schema in which the picture of a cave can only stand for the number 78.
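The decode direction can be sketched the same way: read off the digits from the peg word’s consonants. As before, this is a simplified letter-based sketch of the sound-based system, using the standard Major System table.

```python
# Decode a peg word back to its Major System number (letter-based
# sketch; vowels and w/h/y carry no digit and are simply skipped).
LETTER_TO_DIGIT = {
    's': '0', 'z': '0', 't': '1', 'd': '1', 'n': '2', 'm': '3',
    'r': '4', 'l': '5', 'j': '6', 'k': '7', 'g': '7', 'c': '7',
    'f': '8', 'v': '8', 'p': '9', 'b': '9',
}

def word_to_number(word):
    """Read off the digits encoded by a peg word's consonants."""
    return ''.join(LETTER_TO_DIGIT[ch]
                   for ch in word.lower() if ch in LETTER_TO_DIGIT)

# Pegs that appear in these posts:
assert word_to_number('cave') == '78'     # c=7, v=8
assert word_to_number('rower') == '44'    # r=4, r=4
assert word_to_number('animal') == '235'  # n=2, m=3, l=5
```

Because each consonant maps to exactly one digit, the decoding is unambiguous, which is what makes a picture of a cave stand for 78 and nothing else.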




First (real) target: The Dhammapada

After the initial testing of Pali, Mnemonics and Harry Lorayne’s
tips I waited one full week to see how long those stories would stay in
my long term memory and how often I would have to recall them in order
to make them easily accessible. Mnemonic techniques were working their
magic and even though my habits from pure rote learning made me “feel”
like I had to recall the learned subject over and over again, I was
always amazed how perfectly the story sat in my memory and how easy it
was to access the lists.

During that week I did some research on my next object for memorization. At first I was undecided between the Parayana vagga of the Sutta Nipata and the Dhammapada. Obviously the Dhammapada was quite a big undertaking (423 verses), but of course I had my hopes up that my teenage memorization of it would help me and make it easier to “refresh” and then better organize it using the newly learned tools of memorization.

The Parayana vagga is another long-term friend of mine, a chapter capturing some of the most in-depth ideas of Buddhist philosophy and meditation, and I had tried earlier, 10 years ago, to learn some of it by heart, but never made it beyond the 4th or 5th sutta (of 16).

I finally decided to go for the Dhammapada. I was curious to see if it was true what many memory “acrobats” say – that “you never forget, you just can’t find it” – and I also decided to test this whole mnemonics-meets-Tipitaka idea on an actual book of the Tipitaka. I was curious whether another top-to-bottom approach would work: first mapping out the book structure and then filling it with life.

So the first thing I had to do was to learn the 26 chapters of the Dhammapada by heart. Now this did not seem to be such a big jump anymore, based on my previous experience with the books of the Tipitaka, but I faced one problem which I had to solve, because it would only get bigger the further I went: direct access
;-)

Let’s say someone asked me to chant the 16th chapter of the Dhammapada… how would I know which one was the 16th? Would I want to count the chapter names, walking through a simple associated list of visuals? Or, let’s say I remembered a verse in a given situation and wanted to share it with others, or just look up translations myself – how would I know how to locate that particular passage?

It was clear that my next endeavor needed the application of the
Major system or a similar peg system which I had just started to use
based on Harry Lorayne’s instructions.

So I started to learn the chapter names of the Dhammapada by heart, identifying each chapter with a peg (for instance toes (10) with Danda and nail (25) with Bhikkhu), when I realized that the number pegs were going to be needed in many other books/contexts (if I were to continue this project of mine), and so I needed something else to make these pegs and their names relate to the Dhammapada. For that purpose (quite naturally) I lined up the chapter names along a hiking path which I knew very well and added a touch of the loci system to make these particular items belong to the visual representation of the Dhammapada which I was about to fill with details.

Following the top-bottom approach I started with the first verse in each of the 26 chapters. My encoding seemed to be a challenging task:

  • At the point in the path where I would identify the chapter of the Dhammapada with a peg and an association,
  • I took a side turn and imagined the peg for the verse number becoming
    part of the overall chapter heading and turning into the first episode
    of the initial verse in each chapter.

Let me give you an example at this point to make it sound less theoretical:

The number 4 in the Major system is represented by the letter “r” (think: four). My peg for this letter is the word “Ra” – and I visualize the Egyptian sun god.

Now the fourth chapter in the Dhammapada is the “puppha” or flower chapter (my luck, I don’t have to look this up anymore ;-)

So in my mind I visualized the Egyptian sun god walking through a
field of flowers, where only his head sticks out of this ocean of
flowers.

The verse number of the initial verse in this “Flower chapter” is 44. 44 is represented by the letters (rr), and the peg word according to Lorayne’s list in his “The Memory Book” is “rower” (r-owe-r). You need to really understand the idea behind the Major System in order to realize why it is such a cool system and makes it extremely easy to remember numbers.

What I had to do next was to somehow visualize a “rower” and
“flowers”. This I did, in that I “saw” the rower in his boat row on top
of the ocean of flowers as if he was on water.

Now came the final and tricky part. The four line verse itself. The verse goes like this:

“Ko imam pathavim vicessati, yamalokam imam sadevakam. Ko dhammapadam sudesitam, kusalo pupphamiva pacessati?”

As you may have guessed, this too came from memory. What was to be done? I imagined how the rower would look left and right towards the “banks” of the flower-river, wondering about the meaning of the verse. Here at this point I saw how it helped that I had learned the verses 15 years ago – it was very easy to memorize them, they felt extremely familiar. The most important fact was that I needed triggers to find the key words of the verses encoded in my story – something which I initially did not realize as much as later on.

Within 3 hours I was able to fill the grid of 23 beginning verses (out of 26 chapters) and finished that day amazed at how interesting it was to learn the Dhammapada this way. It seemed almost effortless compared to the pains of rote memorization, and I caught myself wondering quite often how it was possible to still remember so many verses after just “loading them up” in such a short time – but still, the little stories, which kept the verses tied closely to the topic of the chapter and tagged by the verse-number pegs, brought all of them back. The next day I finished the full list of 26 verses – the first one of each chapter – and I knew that for the next weekend I was ready to try something more challenging.




Dealing with the length of the Smaller Collection

Fired up by the ability to recall the list of the seven Abhidhamma books backwards and forwards using the little mnemonic pocket trick of creating a linked association list, I decided to devote my attention that same evening to the last challenge (or so I thought) – memorizing all the books of the Khuddaka Nikaya (the Smaller Collection of the Sutta Pitaka) in order.

While I could probably (with enough time) have listed most of the books in the Khuddaka Nikaya (as the Sutta Pitaka was always my main area of interest), it was still intriguing to see whether I could build an extended list of strange Pali names into an unforgettable association. You might guess the answer – of course it is possible. Here is what I came up with:

It all starts out with a small (Khuddaka, for Khuddakapatha, the first book) really microscopic book which I can identify as a Dhammapada using a magnifier. That Dhammapada has a mouth and can talk, and it shouts (Udana, which actually means proclaim): “This is what the Buddha has said (Iti vutta, for Itivuttaka)” and it points to the Sutta-Nipata. Right at that moment, flying (Vimanavatthu) on two open books, hungry ghosts (Petavatthu) arrive. In the distance encircling me I see a semi-circle of the most noble Arahants sitting and chanting (Thera- and Therigatha), while behind them in space (and time) even less known Theras and Theris (Thera/Theri-Apadana) are sitting. I try to look even further back in time and see the former Buddhas (Buddhavamsa) and their exemplary behavior (Cariyapitaka), and further down the timeline I see the Jatakas.
Then, the “mental camera” returns to the topic of the Sutta-Nipata which was just “proclaimed” by the tiny Dhammapada, and I see two elephants, a large one and a baby one (Mahaniddesa and Culaniddesa), carrying the meaning (niddesa) of the Suttanipata side by side towards a crossroads (Patisambhidamagga). There they are led (Nettipakarana) by a little walking basket (Petakopadesa) towards the goal of their journey, the Milindapanha.

You can imagine that the above story works because the Pali words and meanings make sense to me and so become a natural part of the storytelling or settings. However, later I found that it helped tremendously when I was able to find a substitute word (similar sounding) for the Pali and capture the substitute’s meaning, making it part of the (Pali) story I try to remember. More on that next.




Linking the Abhidhamma

So I looked at the books of the Abhidhamma and thought – wow, how can I possibly create a story out of these names? I knew that if I was going to use more mnemonics for a larger part of my efforts in memorizing parts of the Tipitaka, this would be the first real-world test: can these mnemonic tools actually work on something like a “foreign language” (my efforts were directed at learning the Tipitaka in Pali) and, secondly, on something so different from what the mnemonic tools seemed to be designed for?

Here is the list and it took me about 20 minutes this time to come up with a story:

  • Dhammasangani,
  • Vibhanga,
  • Dhatukatha,
  • Puggalapannyatti,
  • Kathavatthu,
  • Yamaka,
  • Patthana.

I am pretty sure you know by now that these 7 names came from my
memory. I did not have to look them up. They are burned into my head –
all just because of the silly story I connected them with and as long as
I don’t forget that story, I won’t forget those seven names. Here is
the story (and if, for whatever exciting reason, you ever want to learn
them by heart, you probably have to come up with your own story, which
might come more naturally to you).

So what I imagined was several books with heads, arms and legs,
sitting around a table playing cards. Now “sangani” in Pali can mean
“count together” and I imagined that one of these venerable card-playing
Abhidhamma books was “counting the Dhamma coins” together, amassing
them in one big heap in the middle of the table. Then another talking
book, the Vibhanga started dividing (vibhanga in Pali) the heap of money
into three equal amounts. Suddenly the Dhatukatha book, one of the
players, shouted “let’s do something with the money” (here I use German, where “Da-tu”
reminded me of “there, do” something). So the books hand
the money over to one of the players, who is also a book but has dozens
of heads sticking out of it. That is the puggala-pannyatti (puggala
means person, and I think of all the persons that book represents with
its multiple heads). The puggala-pannyatti takes the Dhamma chips and
starts walking towards the Kathavatthu, with whom it wants to start a
conversation (katha in Pali) about what best to do with the money, when all of a
sudden the God of shadows (Yama) shows up hovering above him,
scolding him for playing cards. Yama (who reminds me of the Yamaka) points
to the ground, and the poor Puggala-Pannyatti realizes that it only
stands (Patthana) on a small remaining piece of rock while the earth
around it is crumbling and falling into lava – it does not look good for
the gamblers…
:-)

This story fulfilled several mnemonic criteria which helped me make
it unforgettable: I use unlikely events, unlikely weird objects and
strange situations but still manage to weave them into a story. If you
think that there is too much pathos and too many cartoonish elements in
it – you are right. That is exactly what makes it work. At that point I
discovered another interesting fact: It was actually quite easy to come
up with weird associations because I was able to use multiple languages
and Western-Eastern (sometimes contradicting) symbols and tie them into a
story. That weird combination challenged my visualization and interest
and made it even more likely that I would remember the list.




Enter the Structure

Eventually it dawned on me that what I needed to do first was to
get a proper structure of the terrain into my mind. Something like a
map, a “mind-map” of the Tipitaka. The idea was not to start in the
first book, first sentence (like a bottom up approach) but to start out
with the structure of the Tipitaka, the skeleton. Then, from that point
onwards I would “flesh out” the different books, chapters,
verses/passages I would want to keep in memory and build a “tree of
knowledge”.

So one weekend I decided to start very simple and just learn the book names of the Tipitaka by heart.

Now, a short glance at any content listing of the Tipitaka will tell
you that this is not too hard a thing to do, especially if you have just read
“The Memory Book” and had lots of training in linking items.

The linking of items works in such a way that you try to connect
the things you want to learn (in this case, names of books) and weave
them into funny mental stories. These stories will be different for
each person, as things which help me remember, for instance, the
“Parajikavagga” of the Vinaya will mean nothing to you. Still, let me
show you how I started and you will get the idea:

The books of the Vinaya are:

Parajikavagga, Pacittiyavagga, Mahavagga, Culavagga, Parivara.

One might say: this is such an easy list, I don’t need any fancy
technique to learn it. That is true – for this list. But then again,
how can you make sure that 40 years from now your chances of
remembering this list are as high as possible? The tricks that mnemonics
teaches led me to create a funny little story in my mind – and the
important thing is to really, really visualize each item (even if just
for a second) in such a way that it reminds you of the item.

So I started with “Parajika” and turned it (after 10 minutes of
thinking about what it could be pictured as) into an “Indian king who sat on
a throne always saying pa-pa-pa” (a stuttering raja) – which of course
reminds me of “Pa-raj-(ika)”. Suddenly a “pa-cheetah” (Pacittiya)
jumps from behind a curtain towards the king. The king sees the cheetah
and jumps out of a window, where he lands on a huge elephant (maha,
meaning big in Pali) floating in the air next to a Lilliputian elephant
(cula). Both of them are surrounded (encircled) by a band of golden
light (pari-vara, which can mean circumference).

And that’s it. It’s a silly, funny little story, but it captures all 5 books of the Vinaya (for me).
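The linking method can even be sketched as a tiny data structure – the image descriptions below are the ones from the story above, but the list, the function and its name are purely my illustration, not anything the mnemonics books prescribe:

```python
# A sketch of the "linking" method: each book name is paired with the
# mental image the story attaches to it, and recalling the first image
# lets you walk the whole chain in order.

vinaya_links = [
    ("Parajikavagga", "stuttering raja saying pa-pa-pa on his throne"),
    ("Pacittiyavagga", "pa-cheetah leaping from behind a curtain"),
    ("Mahavagga", "huge elephant floating in the air"),
    ("Culavagga", "tiny elephant next to the huge one"),
    ("Parivara", "band of golden light encircling both elephants"),
]

def recall(chain):
    """Walk the chain of images and return the book names in order."""
    return [name for name, _image in chain]

print(recall(vinaya_links))
# ['Parajikavagga', 'Pacittiyavagga', 'Mahavagga', 'Culavagga', 'Parivara']
```

The point of the chain is that only the first image needs a deliberate cue; every later item is pulled up by the image before it.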

Then I thought: well, that was easy, I can surely do the same with the
Abhidhamma. Even though I had a pretty good knowledge of the makeup of
the Sutta and roughly of the Vinaya, I always struggled with the
Abhidhamma books. This was going to be the first real challenge.





How the journey began

Yaññadeva, bhikkhave, bhikkhu bahulamanuvitakketi anuvicāreti, tathā tathā nati hoti cetaso. (“Whatever a bhikkhu frequently thinks and ponders upon, that will become the inclination of his mind.”)

Majjhima Nikaya

satthā dhammaṃ deseti, aññataro vā garuṭṭhāniyo sabrahmacārī, api ca
kho yathāsutaṃ yathāpariyattaṃ dhammaṃ vitthārena paresaṃ deseti. Yathā
yathā, bhikkhave, bhikkhu yathāsutaṃ yathāpariyattaṃ dhammaṃ vitthārena
paresaṃ deseti tathā tathā so tasmiṃ dhamme atthapaṭisaṃvedī ca hoti
dhammapaṭisaṃvedī ca. Tassa atthapaṭisaṃvedino dhammapaṭisaṃvedino
pāmojjaṃ jāyati. Pamuditassa pīti jāyati. Pītimanassa kāyo passambhati.
Passaddhakāyo sukhaṃ vedeti. Sukhino cittaṃ samādhiyati. Idaṃ,
bhikkhave, dutiyaṃ vimuttāyatanaṃ yattha bhikkhuno appamattassa ātāpino
pahitattassa viharato avimuttaṃ vā cittaṃ vimuccati, aparikkhīṇā vā
āsavā parikkhayaṃ gacchanti, ananuppattaṃ vā anuttaraṃ yogakkhemaṃ
anupāpuṇāti.

Anguttara Nikaya

You are absentminded when your mind is absent; when you perform
actions unconsciously, without thinking…we see with our eyes, but we
observe with our minds. If your mind is “absent” when performing an
action, there can be no observation; more important, there can be no
Original Awareness…The solution to the problem of absentmindedness is
both simple and obvious: All you have to do is to be sure to think of
what you are doing during the moment in which you are doing it…There’s
only one way, and that is by using association. Since association forces
Original Awareness-and since being Originally Aware is the same as
having something register in your mind in the first place, at the moment
it occurs-then forming an instant association must solve the problem of
absentmindedness.

“The Memory Book”

Chances are that if you are an ardent reader of the Sutta Pitaka, the
thought of learning the word of the Buddha by heart comes quite
naturally. For centuries, Buddhist lay people and monks transferred the
knowledge and word of the Awakened One and his teaching through space
and time by no other means than their memory.

The other day I was listening to a series of Dhamma talks at a
retreat event, and this very inspiring young bhikkhu, whose modesty
prevents me from even mentioning his name, mentioned the following idea a
couple of times: “What are we, if not a collection of our memories?”
(He did not mean this in the more philosophical sense, but rather in a
worldly context.) And he finished: “How much would our lives change if
our perceptions, triggered by memories of the words of the Awakened One,
were different ones… filled with more enlightened thoughts?”
This particular monk’s tradition puts a lot of effort into making
contemplation on the word of the Buddha a subject for their meditation,
and it made me look back at my beginning years as a Buddhist.

For when I was a young teenager, first getting in touch with the
magic word of the Buddha, I was so fascinated by it (including the
stories of those Arahants who all knew the Dhamma by heart – not
just in realization but also verbatim from the lips of the Buddha) that I
undertook the strange project of memorizing the Dhammapada by heart. I
was 16 when I started, equipped with nothing more than Ven.
Nyanatiloka’s translation and my simple rote memorization efforts.

In fact, it was Kurt Schmidt’s Pali primer which first got me into
learning Pali by heart. In his small booklet, which was my first
introduction to Pali, right from the start he encourages the student to
consolidate his minuscule Pali knowledge in each chapter by memorizing a
few stanzas. From there I went on to learning the 3 famous suttas, many
months later and with lots of effort, because they simply were chanted
worldwide in all Theravada circles and I thought it would be “nice” to
know at least those texts by heart – if the monk was able to chant them
by heart, why not me?

And then, a year later, it was the Dhammapada which I entered into.
It was a slow, arduous task. Every morning, I would learn a verse, repeat
it. Look it up. Repeat it, look it up, repeat it, look it up – over and
over again. Probably for half an hour before going to school. Then the
next day, I would check if I still remembered the last one, and if so,
would move on to the next, at the end repeating both… When I felt that a
verse was stuck in memory, I skipped repeating it.
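That daily routine can be written down as a rough sketch – this is my own reading of the procedure just described, not the author's exact method, and the verse labels are only placeholders:

```python
# A sketch of the naive daily schedule: each day, first re-check the
# previous day's verse (unless it is already judged "stuck" in memory),
# then learn one new verse.

def daily_session(verses, day, stuck):
    """Return the verses rehearsed on a given (0-indexed) day."""
    session = []
    if day > 0 and verses[day - 1] not in stuck:
        session.append(verses[day - 1])   # check yesterday's verse first
    session.append(verses[day])           # then learn today's new verse
    return session

verses = ["Dhp 1", "Dhp 2", "Dhp 3"]
print(daily_session(verses, 1, stuck=set()))      # ['Dhp 1', 'Dhp 2']
print(daily_session(verses, 2, stuck={"Dhp 1"}))  # ['Dhp 2', 'Dhp 3']
```

Written out like this, the weakness is visible: once a verse is marked “stuck” it is never revisited, which is exactly how holes form later.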

That way, in the most ordinary and innocent school-poem-memorization
way, I proceeded over the length of 1.5 years and made it up to the 23rd
(of 26) chapters. But even then, holes had formed in my memory of the
Dhammapada, and it seemed like a Sisyphean task to keep all of it
accessible and alive.

This was years before the internet became mainstream, and so Tony
Buzan, memorization tricks and mnemonics were unknown unknowns to me.
(I think, when it came to memorizing, it did not even occur to me that
there might be better ways of doing it than, well, memorizing.)

This was over 15 years ago.

When the monk’s Dhamma talk inspired me to look back into the
possible benefits of learning some of the teachings of the Buddha
verbatim I thought of it as an interesting self-experiment. It was
intriguing to find out how the exercise of learning and carrying the
Buddha’s word as an act of meditative attention would benefit me. But I
knew – this time – I was going to go about it in a more scientific
manner.

Over the years I had read some of Tony Buzan’s books and at
university got familiar with the various systems of association, link
building etc. But for whatever reason, it never really hit me and never
became part of my habits. Something which I regretted – very much like
the fact that, of the Dhammapada memorization efforts, only 2-4 verses
were left in my memory. Or so I thought.

The first thing an internet user in the year 2011 does if he wants to
venture into a new area of learning is of course to google the subject
untiringly. That’s what I did. For about a week I was skimming online
websites about Tony Buzan (that’s where I started out) and various
systems for memorization – I knew their theory, but they did not mean
anything to me at that time other than academic theories – only used by
weird people in weird public memorization appearances.

Then I came across Josh Foer’s book “Moonwalking with Einstein”. I
knew it was not a book on the art of mnemonics, which was what I was actually
looking for, as I still had no clue as to how to connect mnemonics of the
21st century with the task of memorizing the Tipitaka – but that was
actually what I intended to do – or at least attempt.

Josh’s book was a great motivational source. In a very accessible way
he showcases his own personal journey into mnemonic techniques, and in
the process describes the subculture of mnemonics as a sport of
competition. Among the many interesting people and experiences he
relates, there was one I kept coming back to: a Chinese doctor who had
memorized the entire Oxford English dictionary, some 50,000 words. In an
online video on YouTube he explains how he did it. And that really
helped me in my pursuit of finding a way to “handle” the Tipitaka or, on
a smaller scale, at least the verbatim memorization of a page, a chapter
or a book.

Because the unfortunate fact remained that, even while I was
devouring Josh’s book, the online search results for learning books
verbatim were very limited. In fact, most resources deal with the Quran,
which is learnt just by rote repetition, and a few Christian pages which
don’t do much differently – sometimes providing an additional repetition
plan à la flashcards.
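A “repetition plan à la flashcards” could be as simple as a Leitner box system – to be clear, this is my own illustration of such a plan, not what those pages actually implement:

```python
# A minimal Leitner-style flashcard scheduler: a correctly recalled card
# moves to a higher box (reviewed less often); a failed card drops back
# to box 1 (reviewed every session).

def review(boxes, card, correct, max_box=3):
    """Move a card between Leitner boxes based on recall success."""
    current = boxes.get(card, 1)
    boxes[card] = min(current + 1, max_box) if correct else 1
    return boxes[card]

boxes = {}
review(boxes, "Dhp 1", correct=True)   # promoted to box 2
review(boxes, "Dhp 1", correct=True)   # promoted to box 3
review(boxes, "Dhp 1", correct=False)  # demoted back to box 1
print(boxes)  # {'Dhp 1': 1}
```

Unlike the naive “drop it once it sticks” approach, a box system keeps every card in rotation, just at increasing intervals.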

But I was looking for something else.

It dawned on me that the first thing I had to do was to update my
knowledge of the basic memorization techniques as described by Josh and
as I had studied a long time ago from Buzan, but which were long forgotten
– well, I knew them, but then again, I wanted some kind of proper
guidance in refreshing my knowledge of them.

So I went to Amazon, looked at the reviews and ordered “The Memory
Book” by Harry Lorayne and Jerry Lucas. Now, this book
looked very old (in fact it was from the 80s), but the cover mentioned
that over 2 million copies were in print (in the newest edition). I can
believe that now. There is something about it which taught me more about
memorization than all the Tony Buzan books I ever read put together –
and that might not actually be Buzan’s fault but my own inadequacy in
“getting it” in those days or from him.

After the first few chapters, in which Harry Lorayne explains the
basic principles of making things memorable and linking items together, I
was astonished (once again) at how well this mnemonic stuff worked and
disappointed in myself for not having found it the day I was born.

Still, 2 weeks passed while I was studying “The Memory Book”,
thinking and thinking about how best to apply such a system to the
verbatim memorization of longer passages, especially with the intention
of recalling them at will. More in my next post.



http://www.academia.edu/7535775/Omnidirectional_3D_Visualization_for_Analysis_of_a_Large-scale_Corpus_The_Tripitaka_Koreana_



Omnidirectional 3D Visualization for Analysis of a Large-scale Corpus: The Tripitaka Koreana

Omnidirectional 3D Visualization for the Analysis of Large-scale Textual Corpora: Tripitaka Koreana

Dr Sarah Kenderdine, ALiVE, City University Hong Kong SAR, China – skenderd@cityu.edu.hk
Prof Lew Lancaster, ECAI, Berkeley University, San Francisco – buddhst@berkeley.edu
Howie Lan, ECAI, Berkeley University, San Francisco – howielan@socrates.berkeley.edu
Tobias Gremmler, City University Hong Kong SAR, China – gremmler@syncon-d.com

Abstract —
This
paper presents the research and development of a new omnispatial
visualization framework for the collaborative interrogation of the
world’s largest textual canon, using the world’s first panoramic
stereoscopic visualization environment - the Advanced Visualization and
Interaction Environment (AVIE). The work is being undertaken at a new
research facility, The Applied Laboratory for Interactive Visualization
and Embodiment (ALiVE), City University of Hong Kong. The dataset used
is the Chinese Buddhist Canon, Koryo version (Tripitaka Koreana) in
classical Chinese, the largest single corpus with 52 million glyphs
carved on 83,000 printing blocks in 13th century Korea. The digitized
version of this Canon (a project led by Berkeley University) contains
metadata that links to geospatial positions, contextual images of
locations referenced in the text, and to the original rubbings of the
wooden blocks. Each character has been abstracted to a ‘blue dot’ to
enable rapid search and pattern visualization. Omnispatial refers to the
ability to distribute this data in 360-degrees around the user where
the virtually presented visual space is in three dimensions (3D). The
project’s omnidirectional interactive techniques for corpora
representation and interrogation offer a unique framework for enhanced
cognition and perception in the analysis of this dataset.
 Keywords-digital humanities; immersive visualization; visual analytics; computational linguistics

I. INTRODUCTION

 Research
into new modalities of visualizing data is essential for a world
producing and consuming digital data (which is predominantly textual
data) at unprecedented scales [22, 37]. Computational linguistics is
providing many of the analytics tools required for the mining of digital
texts (e.g. [43, 44]). The first international workshop for intelligent
interfaces for text visualization recently took place in Hong Kong in 2010 [32].
In the last five years, the visual analytics field has
grown exponentially and its core challenges for the upcoming five years
are clearly articulated [20, 40, 45]. It has been recognized that
existing techniques for interaction design in visual analytics rely upon
visual metaphors developed more than a decade ago [24] such as dynamic
graphs, charts, maps, and plots. Moreover, interactive, immersive and
collaborative techniques to explore large-scale datasets lack adequate
experimental development essential to the construction of knowledge in
analytic discourse [40]. Recent visualization research is constrained to
2D desktop screens and the ensuing interactions of “clicking”,
“dragging” and “rotating” [32, 43]. Furthermore, the number of pixels
available to the user is a critical limiting factor in the human
cognition of data visualizations [22], resulting in the recent
development of gigapixel displays (e.g. Powerwall, StarCave, see [7]).
 
The project described in this paper, Blue Dots AVIE, exploits the opportunities offered by immersive 3D techniques to
enhance collaborative cognitive exploration and interrogation of high
dimensional datasets within the domain of the digital humanities [18].
The research takes core challenges of visualizing this large-scale
humanities data inside a unique 360-degree 3D interactive virtual
environment - AVIE [1] to provide powerful modalities for an omnispatial
exploration responding to the need for embodied interaction,
knowledge-based interfaces, collaboration, cognition and perception
[40]. The research is taking place at a new facility, Applied Laboratory
for Interactive Visualization and Embodiment (ALiVE) located at the
Hong Kong Science Park [4].
 
A. Visual analytics

This
research project responds to core challenges and  potentials identified
in Visual Analytics [24, 48]. Visual analytics is a rapidly expanding
field applied to business intelligence, market analysis, strategic
controlling, security and risk management, health care and
biotechnology, automotive industry, environmental and climate research,
as well as other disciplines of natural, social, and economic sciences
(see [9, 11, 24, 48]). Websites such as Visual Complexity [70]
and Flowing Data [12], and mainstream projects such as Many Eyes [36],
GapMinder [13] and Wordle [50], attest to the increasing interest in
information visualization across multiple disciplines.
 
B. Corpora visualization

Most previous work in text visualization has focused on one of two areas:
visualizing repetitions and visualizing collocations. The former shows
how frequently, and where, particular words are repeated, and the
latter describes the characteristics of the linguistic “neighborhood” in
which these words occur. Word clouds are a popular visualization
technique whereby words are shown in font sizes corresponding to their
frequencies in the document. They can also show changes in the frequencies of
words through time [17] and across different organizations [6], and emotions
in different geographical locations [15].
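The core of the word-cloud technique described above can be sketched in a few lines; note that the linear scaling and the point-size range are my own illustrative choices, not part of any of the cited systems:

```python
# A minimal sketch of the word-cloud principle: each word's font size
# is scaled in proportion to its frequency in the document.

from collections import Counter

def cloud_sizes(words, min_pt=10, max_pt=40):
    """Map each word to a font size scaled linearly by its frequency."""
    freqs = Counter(words)
    top = max(freqs.values())
    return {w: min_pt + (max_pt - min_pt) * n // top for w, n in freqs.items()}

words = ["dhamma", "dhamma", "dhamma", "vinaya", "sutta", "dhamma"]
print(cloud_sizes(words))
# {'dhamma': 40, 'vinaya': 17, 'sutta': 17}
```

Real word-cloud tools add layout, stop-word filtering and often logarithmic rather than linear scaling, but the frequency-to-size mapping is the essential step.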
 
The significance of a word also lies in the locations at which it
occurs. Tools such as TextArc [39], Blue Dots [28 - 31] and Arc
Diagrams [48] visualize these “word clusters” but are constrained by the
small window size of a desktop monitor. In a concordance or
“keyword-in-context” search, the user supplies one or more query words,
and the search engine returns a list of sentence segments in which those
words occur. IBM’s Many Eyes displays the context with suffix trees,
thereby visualizing the most frequent n-grams following a particular
word [49]. In the digital humanities, words and text strings are the
typical mode of representation of mass corpora. However, new modes of
lexical visualization are emerging, such as Visnomad [46], a dynamic
visualization tool for comparing one text with another, and the
Visualization of the Bible (Figure 2) by Chris Harrison, where each of
the 63,779 cross references found in the Bible is depicted by a single
arc whose color corresponds to the distance between the two chapters [16].
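A keyword-in-context search of the kind just described can be sketched very simply; this toy version tokenizes on whitespace and uses a fixed word window, both of which are simplifying assumptions of mine rather than how the cited engines work:

```python
# A small sketch of a "keyword-in-context" (KWIC) search: for each
# occurrence of the query word, return a window of surrounding words.

def kwic(text, query, window=2):
    """Return segments of `window` words around each hit of `query`."""
    tokens = text.split()
    hits = []
    for i, tok in enumerate(tokens):
        if tok == query:
            hits.append(" ".join(tokens[max(0, i - window):i + window + 1]))
    return hits

text = "the canon of the school is the oldest canon in print"
print(kwic(text, "canon"))
# ['the canon of the', 'the oldest canon in print']
```

Production concordancers add an inverted index so that hits are found without scanning the whole corpus, but the returned segments look just like this.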

Figure 1. Visualizing the Bible, Chris Harrison.

C. Gesture based computing

The future of Visual Analytics is closely related to HCI, and its
development is tied to gesture-based computing for data retrieval [21].
Microsoft’s Project Natal [42] and Pranav Mistry’s (MIT) SixthSense [41]
are examples of the increasing use of intuitive devices that promote
kinesthetic, embodied relationships with data.
 
D. High definition immersive visualization

Visualization systems for large-scale data sets are increasingly
focused on effectively representing their many levels of complexity.
This includes gigapixel tiled displays such as HIPerSpace at Calit2 [19]
(Figure 1) and next-generation immersive virtual reality systems such as
StarCAVE (UC San Diego) [7] and Allosphere (UC Santa Barbara) [2]. In the
humanities, Cultural Analytics uses computer-based techniques for
quantitative analysis and interactive visualization, as employed in the
sciences, to analyze massive multi-modal cultural data sets on
gigapixel screens [35].

Figure 2. Using a unique 287 megapixel HIPerSpace at Calit2 (San Diego) for Manga research.

E. Advanced Visualization and Interaction Environment

The Advanced Visualization and Interaction Environment (AVIE [1]) is the
UNSW iCinema Research Centre’s landmark 360-degree stereoscopic
interactive visualization environment [38]. An updated
active-stereo projection system together with camera tracking is
installed at ALiVE and forms part of the core infrastructure for the
Lab. The base configuration is a cylindrical projection screen 4 meters
high and 10 meters in diameter, a 12-channel stereoscopic projection
system and a 14.2 surround sound audio system

(Figure 3).

AVIE’s immersive mixed reality capability articulates an embodied
interactive relationship between the viewers and the projected
information spaces. This system is used for the corpora visualization
being undertaken at ALiVE. In 2010, the social network visualizations of
the Gaoseng Zhuan Corpus (biographies) [8] were integrated into the AVIE
system as a prototype [26]. On the web, the social networks are
displayed as spring-loaded nodal points (JavaScript applet; Figure 4).
However, for immersive architectures such as AVIE, there is no intuitive
way to visualize such a social network as a 2D relationship graph in the
virtual 3D space provided by the architecture itself. Therefore this
project proposed an effective mapping that projects the 2D relationship
graph into 3D virtual space and provides an immersive visualization and
interaction system for intuitively visualizing and exploring the social
network using the immersive architecture AVIE. The basic concept is to
map the center of the graph (the user) to the center of the virtual 3D
world and project the graph horizontally on the ground of the virtual
space. This mapping provides an immersive visualization in which the
more related nodes are closer to the center of the virtual world, where
the user is standing and operating (Figure 5).
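The center-outward floor mapping described above can be sketched as follows; the inverse-linear radius formula and the even angular spread are my own simplifying assumptions, not the paper's actual projection:

```python
# A sketch of mapping a 2D relationship graph onto the floor of a 3D
# space: the user stands at the origin, and each node is placed on the
# ground plane at a radius that shrinks as its relatedness grows.

import math

def place_on_floor(nodes, max_radius=5.0):
    """Map {name: relatedness in (0, 1]} to (x, y, z=0) floor positions."""
    placed = {}
    for i, (name, rel) in enumerate(nodes.items()):
        r = max_radius * (1.0 - rel)          # stronger relation -> smaller radius
        theta = 2 * math.pi * i / len(nodes)  # spread nodes evenly around the user
        placed[name] = (r * math.cos(theta), r * math.sin(theta), 0.0)
    return placed

pos = place_on_floor({"close monk": 0.9, "distant monk": 0.1})
print(round(math.hypot(*pos["close monk"][:2]), 2))   # 0.5
print(round(math.hypot(*pos["distant monk"][:2]), 2)) # 4.5
```

The key property is the one the paper states: the more strongly a node relates to the center, the closer it lands to where the user is standing.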
Figure 3. Advanced Visualization and Interaction Environment. Image © ALiVE, CityU.
Figure 4. Social Networks of Eminent Buddhists, JavaScript applet © Digital Archives, Dharma Drum Buddhist College.
Figure 5. Social Networks of Eminent Buddhists, nodal networks distributed in 360-degrees in AVIE © ALiVE, CityU.

II. THE BLUE DOTS

Blue Dots AVIE builds upon the Blue Dots visualization metaphor developed
for the interrogation of the Buddhist Canon, in which each character is
represented as a blue dot [28 - 31]. This version of the Buddhist Canon
is inscribed as UNESCO World Heritage and enshrined in Haeinsa, Korea. The
166,000 pages of rubbings from the wooden printing blocks constitute
the oldest complete set of the corpus in print format (Figures 6 &
7). Divided into 1,514 individual texts, the version has a challenging
complexity, since the texts represent translations from Indic
languages into Chinese over a 1000-year period (2nd-11th centuries).
This is the world’s largest single corpus, containing over 50 million
glyphs; it was digitized and encoded by Prof Lew Lancaster and his
team in a project that started in the 70s.
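The “blue dot” abstraction itself can be illustrated with a toy example – this is my own minimal sketch of the idea, not the project's code, and it renders matches as characters rather than rendering actual dots:

```python
# A toy sketch of the "blue dot" abstraction: every glyph becomes an
# undifferentiated dot ('.'), and a query lights up ('o') only the
# positions where the searched character occurs, so repetition patterns
# become visible without reading the text.

def blue_dots(text, query):
    """Render each character as '.', with query matches shown as 'o'."""
    return "".join("o" if ch == query else "." for ch in text)

page = "佛說法佛說經"
print(blue_dots(page, "佛"))  # o..o..
```

Abstracting glyphs to dots is what makes a 52-million-character corpus searchable as a visual pattern: the reader scans dot distributions instead of classical Chinese.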

A. Summary of content

    •    1,504 texts
    •    160,465 pages
    •    52,000,000 glyphs
    •    1 text includes 107 pages (34,674 glyphs)
    •    1 page includes 324 glyphs arranged in 23 rows and 14 columns

B. Contextual information

    •    1,504 colophons with titles, translators, dates, places, and other information
    •    202 people names (translators, authors, compilers)
    •    98 monastery names




http://www.museumsandtheweb.com/mw2011/papers/cultural_data_sculpting_omni_spatial_visualiza.html

Museums and the Web 2011: the international conference for culture and heritage on-line


Cultural Data Sculpting: Omni-spatial Visualization for Large Scale Heterogeneous Datasets

Sarah Kenderdine, Applied Laboratory of Interactive
Visualization and Embodiment, CityU Hong Kong, Special Projects, Museum
Victoria, Australia; and Tim Hart, Information, Multimedia and
Technology, Museum Victoria, Australia

Abstract

The rapid growth in participant culture
(embodied by Web 2.0) has seen creative production overtake data access
as the primary motive for interaction with museum databases and online
collections. The meaning of diverse bodies of data is increasingly
expressed through the user’s creative exploration and re-application of
data, rather than through the simple access to information, and presents
the museum with theoretical, experimental and technological challenges.
In addition, the opportunities offered by interactive 3D technologies
in combination with visual analytics for enhanced cognitive exploration
and interrogation of rich multimedia data still need to be realized
within the domain of the digital humanities.

This paper presents four research projects currently underway to develop new omni-spatial visualization strategies.

Keywords: 3D, immersive, information visualization, interactive narrative, museum collection, archaeology, corpora,

1. Introduction

This paper presents four research projects currently underway to
develop new omni-spatial visualization strategies for the collaborative
interrogation of large-scale heterogeneous cultural datasets using the
world’s first 360-degree stereoscopic visualization environment
(Advanced Visualization and Interaction Environment – AVIE). The AVIE
system enables visualization modalities through full body immersion,
stereoscopy, spatialized sound and camera-based tracking. The research
integrates groundbreaking work by a renowned group of international
investigators in virtual environment design, immersive interactivity,
information visualization, museology, visual analytics and computational
linguistics. The work is being implemented at the newly established
world-leading research facility, City University’s Applied Laboratory
for Interactive Visualization and Embodiment (ALiVE), in association
with partners Museum Victoria (Melbourne), iCinema Centre, UNSW
(Sydney), ZKM Centre for Art and Media (Karlsruhe), UC Berkeley (USA),
UC Merced (USA) and the Dharma Drum Buddhist College (Taiwan). The
applications are intended for museum visitors and for humanities
researchers. They are: 1) Data Sculpture Museum; 2) Rhizome of the Western Han; 3) Blue Dots (Tripitaka Koreana) and, 4) the Social Networks of Eminent Buddhists
(Biographies from Gaoseng Zhuan). The research establishes new
paradigms for interaction with future web-based content as situated
embodied experiences.

The rapid growth in participant culture embodied by Web 2.0 has seen
creative production overtake basic access as the primary motive for
interaction with databases, archives and search engines (Manovich 2008).
The meaning of diverse bodies of data is increasingly expressed through
the user’s intuitive exploration and re-application of that data,
rather than simply access to information (NSF 2007). This demand for
creative engagement poses significant experimental and theoretical
challenges for the memory institutions and the storehouse of cultural
archives (Del Favero et al. 2009). The structural model that has emerged
from the Internet exemplifies a database paradigm where accessibility
and engagement is constrained to point and click techniques where each
link is the node of interactivity. Indeed, the possibility of more
expressive potentials for interactivity and alternative modalities for
exploring and representing data has been passed by, largely ignored
(Kenderdine 2010). In considering alternatives, this paper explores
situated experiments emerging from the expanded cinematic that
articulate for cultural archives a reformulation of database
interaction, narrative recombination and analytic visualization.

The challenges of what can be described as cultural data sculpting,
following Zhao & Van Moere’s ‘data sculpting’ (2008), are
currently being explored at a new research facility, the Applied
Laboratory of Interactive Visualization and Embodiment (ALiVE),
co-directed by Dr. Sarah Kenderdine and Prof. Jeffrey Shaw (http://www.cityu.edu.hk/alive).
It has been established under the auspices of CityU Hong Kong and is
located at the Hong Kong Science Park (http://www.hkstp.org). ALiVE
builds on creative innovations that have been made over the last ten
years at the UNSW iCinema Research Centre, at the ZKM Centre for Art and
Media, and at Museum Victoria. ALiVE has a transdisciplinary research
strategy. Core themes include aesthetics of interactive narrative;
kinaesthetic and enactive dimensions of tangible and intangible cultural
heritage; visual analytics for mass heterogeneous datasets; and
immersive multi-player exertion-based gaming. Embedded in the title of
ALiVE are the physicality of data and the importance of sensory and
phenomenological engagement in an irreducible ensemble with the world.

Laboratories such as ALiVE can act as nodes of the cultural imaginary
of our time. Throughout the arts and sciences, new media technologies
are allowing practitioners the opportunity for cultural innovation and
knowledge transformation. As media archaeologist Siegfried Zielinski
says, the challenge for contemporary practitioners engaged with these
technologies is not to produce “more cleverly packaged information of
what we know already, what fills us with ennui, or tries to harmonize
what is not yet harmonious” (Zielinski 2006, p.280). Instead, Zielinski
celebrates those who, inside the laboratories of current media praxis,
“understand the invitation to experiment and to continue working on the
impossibility of the perfect interface” (Zielinski 2006, p.259). To
research the ‘perfect interface’ is to work across heterogeneity and to
encourage “dramaturgies of difference” (Zielinski 2006, p.259).
Zielinski also calls for ruptures in the bureaucratization of the
creative and cultural industries of which museums are key stakeholders.

At the intersections of culture and new technologies, Zielinski
observes that the media interface is both: “…poetry and techne capable
of rendering accessible expressions of being in the world, oscillating
between formalization and computation, and intuition and imagination”
(Zielinski 2006, p.277). And as philosopher Gaston Bachelard describes
in L’Invitation au Voyage, imagination is not simply about the forming of images; rather:

… the faculty of deforming the images, of freeing ourselves from the immediate images; it is especially the faculty of changing images. If there is not a changing of images, an unexpected union of images, there is no imagination, no imaginative action. If a present image does not recall an absent
one, if an occasional image does not give rise to a swarm of deviant
images, to an explosion of images, there is no imagination… The
fundamental work corresponding to imagination is not image but the imaginary. The value of an image is measured by the extent of its imaginary radiance or luminance. (Bachelard 1971/1988, p.19)

This paper documents the challenge of designing forms of new cultural
expression in the cultural imaginary using databases of cultural
materials. It also investigates new forms of scholarly access to
large-scale heterogeneous data. It describes this research as it has
been realized in the world’s first 360-degree interactive stereographic
interface, the Advanced Visualization and Interaction Environment (AVIE). The
four experimental projects included in this paper draw upon disciplines
such as multimedia analysis, visual analytics, interaction design,
embodied cognition, stereographics and immersive display systems,
computer graphics, semantics and intelligent search and, computational
linguistics. The research also investigates media histories,
recombinatory narrative, new media aesthetics, socialization and
presence in situated virtual environments, and the potential for a
psychogeography of data terrains. The datasets used in these four works
are:

1. Data Sculpture Museum: over 100,000 multimedia rich
heterogeneous museological collections covering arts and sciences
derived from the collections of Museum Victoria, Melbourne and ZKM
Centre for Art and Media, Karlsruhe. For general public use in museum
contexts.

2. Rhizome of the Western Han: laser-scan archaeological
datasets from two tombs and archaeological collections of the Western
Han, Xian, China, culminating in a metabrowser and interpretive cybermap.
For general public use in museum contexts.

3. Blue Dots: Chinese Buddhist Canon, Koryo version
(Tripitaka Koreana) in classical Chinese, the largest single corpus with
52 million glyphs carved on 83,000 printing blocks in 13th century
Korea. The digitized Canon contains metadata that link to geospatial
positions, to contextual images of locations referenced in the text, and
to the original rubbings of the wooden blocks. Each character has been
abstracted to a ‘blue dot’ to enable rapid search and pattern
visualization. For scholarly use and interrogation.

4. Social Networks of Eminent Chinese Buddhists: a visualization based on the Gaoseng zhuan corpus. For scholarly use and interrogation.
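The 'blue dot' abstraction behind dataset 3 can be sketched as follows. This is a minimal, hypothetical Python illustration (not the ECAI implementation) of how reducing each glyph to a position-tagged token enables fast phrase search across a large corpus:

```python
from collections import defaultdict

def index_corpus(texts):
    """Abstract every glyph of a {text_id: string} corpus into a
    position-tagged 'dot', indexed by glyph for fast lookup."""
    index = defaultdict(set)
    for text_id, content in texts.items():
        for offset, glyph in enumerate(content):
            index[glyph].add((text_id, offset))
    return index

def find_phrase(index, phrase):
    """Find every start position of `phrase`: candidate starts come from
    the first glyph's dots; each candidate is verified by checking that
    the remaining glyphs occur at the following offsets in the same text."""
    hits = []
    for text_id, offset in index.get(phrase[0], ()):
        if all((text_id, offset + i) in index.get(g, set())
               for i, g in enumerate(phrase[1:], start=1)):
            hits.append((text_id, offset))
    return sorted(hits)
```

Searching becomes a set-membership scan over positions rather than a rescan of page images, which is what turns corpus-wide pattern queries from "scholarly years" into minutes.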

To contextualize these projects, this paper begins by briefly
introducing the rationale for the use of large-scale immersive display
systems for data visualization, together with a description of the AVIE
display system. Several related research demonstrators are described as
background for the aforementioned projects currently underway at ALiVE.
The paper also includes brief accounts of multimedia analysis, visual
analytics and text visualization as emerging and fast developing fields
applied to large-scale datasets and heterogeneous media formats, as
techniques applicable to cultural datasets.

2. Culture@Large

Advanced Visualization and Interaction Environment

The Advanced Visualization and Interaction Environment (AVIE) is the UNSW
iCinema Research Centre’s landmark 360-degree stereoscopic interactive
visualization environment. The base configuration is a cylindrical
projection screen 4 meters high and 10 meters in diameter, a 12-channel
stereoscopic projection system and a 14.2 surround sound audio system.
AVIE’s immersive mixed reality capability articulates an embodied
interactive relationship between the viewers and the projected
information spaces (Figure 1). It uses an active stereo projection
solution and camera tracking (for a full technical description, see
McGinity et al. 2007).

Fig 1: Advanced Visualization and Interaction Environment (AVIE) © UNSW iCinema Research Centre. Image: ALiVE 2010

Embodied Experiences

The research projects under discussion in this paper are predicated
on the future use of real-time Internet delivered datasets in
large-scale virtual environments, facilitating new modalities for
interactive narrative and new forms of scholarship. The situated spaces
of immersive systems provide a counterpoint to the small-scale desktop
delivery and distributed consumption of internet-deployed cultural
content. New museology has laid the foundations for many of the museums
we see today as ‘zones of contact’, places of ‘civic seeing’ (Bennett
2006, pp. 263-281), and engagements with poly-narratives and dialogic
experience. The immersive display system AVIE encourages physical
proximity, allowing new narrative paradigms to emerge from interactivity
and physical relationships. Situated installations allow for
human-to-human collaborative engagements in the interrogation of
cultural material and mediations of virtual worlds. The physical
proximity of the participants has a significant influence on the
experiences of such installations (Kenderdine & Schettino 2010,
Kenderdine & Shaw 2009, Kenderdine et al. 2009).

For online activity, shared socialization of searching cultural data
such as social media involving non-specialist tagging of cultural
collection data (Chan 2007; Springer et al. 2008) is still largely
contained within a 2D flat computer screen. In the case of social
tagging, the impersonal, invisible characteristics of the medium deny
the inhibiting aspect of physical distance. Studies of human experiences
demonstrate that perception implies action, or rather interaction,
between subjects (Maturana & Varela 1980). Knowledge is a process
that is inherently interactive, communicative and social (Manghi 2004).
Elena Bonini, researcher in digital cultural heritage, describes the
process:

All art works or archaeological findings are not
only poetic, historical and social expressions of some peculiar human
contexts in time and space, but are also [the] world’s founding acts.
Every work of art represents an original, novel re-organization of the
worlds or a Weltanschauung… aesthetic experience creates worlds of
meaning (Bonini 2008, p.115).

Research into new modalities of visualizing data is essential for a
world producing and consuming digital data at unprecedented rates (Keim
et al., 2006; McCandless, 2010). Existing techniques for interaction
design in visual analytics rely upon visual metaphors developed more
than a decade ago (Keim et al. 2008), such as dynamic graphs, charts,
maps, and plots. Currently, interactive, immersive and collaborative
techniques to explore large-scale datasets lack adequate experimental
development essential to the construction of knowledge in analytic
discourse (Pike et al. 2009). Recent visualization research remains
constrained to 2D small-screen-based analysis and to conventional
interactive techniques of “clicking”, “dragging” and “rotating” (Lee et al. 2009,
Speer et al. 2010, p.9). Furthermore, the number of pixels available to
the user remains a critical limiting factor in human cognition of data
visualizations (Kasik et al., 2009). The increasing trend towards
research requiring ‘unlimited’ screen resolution has resulted in the
recent growth of gigapixel displays.

Visualization systems for large-scale data sets are increasingly focused on
effectively representing their many levels of complexity. This includes
tiled displays such as HIPerSpace at Calit2 (http://vis.ucsd.edu/mediawiki/index.php/Research_Projects:_HIPerSpace)
and next-generation immersive virtual reality systems such as StarCAVE
(UC San Diego, De Fanti et al. 2009) and Allosphere at UC Santa Barbara
(http://www.allosphere.ucsb.edu/).

In general, however, the opportunities
offered by interactive and 3D technologies for enhanced cognitive
exploration and interrogation of high dimensional data still need to be
realized within the domain of visual analytics for digital humanities
(Kenderdine, 2010). The projects described in this paper take on these
core challenges of visual analytics inside AVIE to provide powerful
modalities for an omni-directional (3D, 360-degree) exploration of
multiple heterogeneous datasets responding to the need for embodied
interaction; knowledge-based interfaces, collaboration, cognition and
perception (as identified in Pike et al., 2009). A framework for
‘enhanced human higher cognition’ (Green, Ribarsky & Fisher 2008) is
being developed that extends familiar perceptual models common in
visual analytics to facilitate the flow of human reasoning. Immersion in
three-dimensionality representing infinite data space is recognized as a
pre-requisite for higher consciousness and autopoiesis (Maturana &
Varela, 1980), and promotes non-vertical and lateral thinking (see
Nechvatal, 2009). Thus, a combination of algorithmic and human
mixed-initiative interaction in an omni-spatial environment lies at the
core of the collaborative knowledge creation model proposed.

The four projects discussed also leverage the potential inherent in a
combination of ‘unlimited screen real-estate’, ultra-high stereoscopic
resolution and 360-degree immersion to resolve problems of data
occlusion and distribute the mass of data analysis in networked
sequences revealing patterns, hierarchies and interconnectedness. The
omni-directional interface prioritizes ‘users in the loop’ in an
egocentric model (Kasik, et al. 2009). The projects also expose what it
means to have embodied spherical (allocentric) relations to the
respective datasets. These hybrid approaches to data representation also
allow for the development of sonification strategies to help augment
the interpretation of the results. The tactility of data is enhanced in
3D and embodied spaces by attaching audio to its abstract visual
elements and has been well defined by researchers since Chion and others
(1994). Sonification reinforces spatial and temporal relationships
between data (e.g. the object’s location in 360-degrees/infinite 3D
space and its interactive behavior; for example, see West et al., 2008).
The multi-channel spatial array of the AVIE platform offers
opportunities for creating a real-time sonic engine designed
specifically to enhance cognitive and perceptual interaction, and
immersion in 3D. It also can play a significant role in narrative
coherence across the network of relationships evidenced in the datasets.
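The sonification strategy described above can be sketched in miniature. The following toy Python function maps an object's angular position in a 360-degree space to per-speaker gains using equal-power panning between the two nearest speakers of a ring; the speaker count is an assumption for illustration, not AVIE's actual audio configuration:

```python
import math

def ring_gains(angle_deg, n_speakers=12):
    """Toy spatialization sketch: place a sound at `angle_deg` on a ring
    of equally spaced speakers by equal-power panning between the two
    nearest ones (speaker count is an illustrative assumption)."""
    spacing = 360.0 / n_speakers
    pos = (angle_deg % 360.0) / spacing       # fractional speaker index
    lo = int(pos) % n_speakers                # nearest speaker anticlockwise
    hi = (lo + 1) % n_speakers                # nearest speaker clockwise
    frac = pos - int(pos)                     # position between the two
    gains = [0.0] * n_speakers
    gains[lo] = math.cos(frac * math.pi / 2)  # equal-power crossfade keeps
    gains[hi] = math.sin(frac * math.pi / 2)  # perceived loudness constant
    return gains
```

Because the squared gains always sum to one, a sound keeps constant perceived loudness as it sweeps around the listener, reinforcing the spatial relationships between data objects and their audio.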

3. Techniques for Data Analysis and Visualization

Multimedia Analysis

This short section introduces the intersection of key disciplines
related to the projects in this paper. Multimedia analysis in the recent
past has generally focused on video, images, and, to some extent,
audio. An excellent review of the state of the art appeared in IEEE Computer Graphics and Applications
(Chinchor et al. 2010). Multimedia Information Retrieval is at the
heart of computer vision, and as early as the 1980s, image analysis
included things such as edge finding, boundary and curve detection,
region growing, shape identification, feature extraction and so on.
Content based information retrieval in the 1990s for images and video
became a prolific area of research, directed mainly at scholarly use. It
was with the mass growth of Internet based multimedia for general
public use that the research began to focus on human-centric tools for
content analysis and retrieval (Chinchor et al. 2010, p.52). From the
mid-1990s, image and video analysis included colour, texture, shape and
spatial similarities. Video parsing topics include segmentation, object
motion analysis, framing, and scene analysis. Video abstraction
techniques include skimming, key frame extraction, content-based
retrieval of clips, indexing and annotation (e.g. Aigrain et al. 1996).
Other researchers attempt to get semantic analysis from multimedia
content. Michael Lew and colleagues try to address the semantic gap by
“translating the easily computable low-level content-based media feature
to high level concepts or terms intuitive to users” (cited in Chinchor
et al. 2010, p.54). Other topics such as similarity matching,
aesthetics, security, and storytelling have been the focus of research
into web-based multimedia collections (Datta et al. 2008). Also, shot
boundary detection, face detection and content-based 3D shape retrieval
have been the focus of recent research (respectively Lienhart 2001; Yang
et al. 2002; Tangelder & Veltkamp 2004). A notable project for
multimedia analysis for the humanities is Carnegie Mellon University’s
Informedia project (Christel 2009 (www.informedia.cs.cmu.edu))
that uses speech, image and language processing to improve the
navigation of video and audio corpora (The HistoryMakers
African-American oral history archive (http://www.idvl.org/thehistorymakers/)
& NIST TRECVID broadcast news archive). However, it is generally
agreed that research in this area is application specific, and that robust,
automatic solutions for the whole domain of media remain largely
undiscovered (Chinchor et al. 2010, p.55).

Visual analytics

In the last five years, the visual analytics field has grown
enormously, and its core challenges for the upcoming five years are
clearly articulated (Thomas & Kielman 2009; Pike et al. 2009).
Visual Analytics includes associated fields of Scientific Visualization,
Information Visualization and Knowledge Engineering, Data Management
and Data Mining, as well as Human Perception and Cognition. The research
agenda of Visual Analytics addresses the science of analytics
reasoning, visual representation and interaction techniques, data
representation and transformations, presentation, production and
dissemination (Thomas & Kielman 2009, p.309). Core areas of
development have included business intelligence, market analysis,
strategic controlling, security and risk management, health care and
biotechnology, automotive industry, environment and climate research, as
well as other disciplines of natural, social, and economic sciences.

In the humanities, Cultural Analytics as
developed by UC San Diego uses computer-based techniques for
quantitative analysis and interactive visualization employed in sciences
to analyze massive multi-modal cultural data sets on gigapixel screens
(Manovich 2009). The project draws upon cutting-edge
cyberinfrastructure and visualization research at Calit2.

Visual analytics was referred to as one of the upcoming key technologies
for research, with an adoption horizon of four to five years, in the
Horizon Report (2010). While
visual data analysis was not mentioned in the Museum edition of the
Horizon report (2010), I would argue that it represents techniques
fundamental to the re-representation of museological collections (online
and offline).

Text visualization

Research into new modalities of visualizing data is essential for a
world producing and consuming digital data (which is predominantly
textual data) at unprecedented scales (McCandless 2010; Keim 2006).
Computational linguistics is providing many of the analytics tools
required for the mining of digital texts (e.g. Speer et al. 2010; Thai
& Handschuh 2010). The first international workshop on intelligent
interfaces for text visualization took place only recently, in Hong Kong in
2010 (Lui et al. 2010). Most previous work in text visualization focused
on one of two areas: visualizing repetitions, and visualizing
collocations. The former shows how frequently, and where, particular
words are repeated, and the latter describes the characteristics of the
linguistic “neighborhood” in which these words occur. Word clouds are a
popular visualization technique whereby words are shown in font sizes
corresponding to their frequencies in the document. The technique can also show
changes in frequencies of words through time (Havre et al. 2000) and in
different organizations (Collins et al. 2009), and emotions in different
geographical locations (Harris & Kamvar 2009). The significance of a
word also lies in the locations at which it occurs. Tools such as TextArc (Paley 2002), Blue Dots (Lancaster 2007, 2008a, 2008b) and Arc Diagrams
(Wattenberg 2002) visualize these “word clusters” but are constrained
by the small window size of a desktop monitor. In a concordance or
“keyword-in-context” search, the user supplies one or more query words,
and the search engine returns a list of sentence segments in which those
words occur. IBM’s Many Eyes (http://www.manyeyes.alphaworks.ibm.com)
displays the context with suffix trees, thereby visualizing the most
frequent n-grams following a particular word. In the digital humanities,
use of words and text strings is the typical mode of representation of
mass corpora. However, new modes of lexical visualization such as Visnomad (http://www.visnomad.org) are emerging as dynamic visualization tools for comparing one text with another. In another example, the Visualization of the Bible
by Chris Harrison, each of the 63,779 cross references found in the
Bible is depicted by a single arc whose color corresponds to the
distance between the two chapters (Harrison & Romhild 2008).
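The concordance or "keyword-in-context" search described above can be sketched minimally. This is an illustrative Python stand-in, not the implementation of any of the tools cited:

```python
def kwic(text, query, window=3):
    """Keyword-in-context sketch: list every occurrence of `query` with
    up to `window` words of context on either side."""
    words = text.split()
    lines = []
    for i, w in enumerate(words):
        if w.lower() == query.lower():
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            # keep only non-empty parts so edge hits don't gain stray spaces
            lines.append(" ".join(p for p in (left, f"[{w}]", right) if p))
    return lines
```

Each returned line is one sentence segment centred on the query word, which is exactly the list-of-contexts form that tools such as Blue Dots then abstract into visual arrays.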

4. Cultural Data Sculpting

This section details prior work by the research partners dealing with large-scale video archives (iCinema Centre, T_Visionarium I & II 2003; 2008; Open City 2009) and the real-time delivery of Wikipedia pages and image retrieval from the Internet (ZKM, CloudBrowsing,
2008). All projects were delivered in large-scale museum-situated
panoramic or spherical projection systems. The paper then goes on to
describe the current works under production at ALiVE.

Previous work

T_Visionarium

T_Visionarium I was developed by iCinema Centre, UNSW in
2003. It takes place in Jeffrey Shaw's EVE dome (Figure 2), an
inflatable structure 12 meters by 9 meters. Upon entering the dome, viewers
place position-tracking devices on their heads. The projection system is
fixed on a motorized pan tilt apparatus mounted on a tripod. The
database used here was recorded during a week-long period from 80
satellite television channels across Europe. Each channel plays
simultaneously across the dome; however, the user directs or reveals any
particular channel at any one time. The matrix of ‘feeds’ is tagged
with different parameters – keywords such as phrases, color, pattern,
and ambience. Using a remote control, each viewer selects options from a
recombinatory search matrix. On selection of a parameter, the matrix
then extracts and distributes all the corresponding broadcast items of
that parameter over the entire projection surface of the dome. For
example, selecting the keyword “dialogue” causes all the broadcast data
to be reassembled according to this descriptor. By head turning, the
viewer changes the position of the projected image, and shifts from one
channel’s embodiment of the selected parameter to the next. In this way,
the viewer experiences a revealing synchronicity between all the
channels linked by the occurrence of keyword tagged images. All these
options become the recombinatory tableau in which the original data is
given new and emergent fields of meaning (Figure 3).

Fig 2: EVE © Jeffrey Shaw

Fig 3: T_Visionarium I © UNSW iCinema Research Centre
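The recombinatory logic described above (select a parameter, extract every matching broadcast item, redistribute the matches across the projection surface) can be sketched as a toy function; the clip structure and field names are assumptions for illustration:

```python
def redistribute(clips, parameter, n_positions=500):
    """Toy sketch of the recombinatory matrix: selecting a parameter
    extracts every clip tagged with it and assigns the matches, in
    order, to the available projection positions around the display."""
    matches = [clip for clip in clips if parameter in clip["tags"]]
    return {pos: clip for pos, clip in enumerate(matches[:n_positions])}
```

Each new selection rebuilds the whole position-to-clip mapping, which is why the entire display reassembles itself around a single chosen descriptor such as "dialogue".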

T_Visionarium II (produced as part of the ARC Discovery,
‘Interactive Narrative as a Form of Recombinatory Search in the
Cinematic Transcription of Televisual Information’) uses 24 hours of
free-to-air broadcast TV footage from 7 Australian channels as its
source material. This footage was analyzed by software for changes of
camera angle, and at every change in a particular movie (whether it be a
dramatic film or a sitcom), a cut was made, resulting in a database of
24,000 clips of approx. 4 seconds each. Four researchers were employed
to hand-tag each 4-second clip with somewhat idiosyncratic metadata
related to the images shown, including emotion, expression, physicality
and scene structure, with metatags including speed, gender, colour and so
on. The result is 500 simultaneous video streams, looping every 4 seconds
and responsive to a user’s search (http://www.icinema.unsw.edu.au/projects/prj_tvis_II_2.html) (Figures 4 & 5).

Fig 4: T_Visionarium II in AVIE © UNSW iCinema Research Centre

Fig 5: Close of the dataspace, T_Visionarium II © UNSW iCinema Research Centre
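The segmentation step described above, cutting footage at every change of camera angle, is commonly implemented as frame-difference thresholding. The following toy sketch illustrates that general idea rather than the project's actual software; real systems add smoothing and histogram comparison, and the threshold here is an assumption:

```python
def cut_points(frames, threshold=0.4):
    """Toy shot-boundary sketch: flag a cut wherever the mean absolute
    difference between consecutive grayscale frames (lists of 0-1 pixel
    values) exceeds a threshold."""
    cuts = []
    for i in range(1, len(frames)):
        prev, cur = frames[i - 1], frames[i]
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        if diff > threshold:
            cuts.append(i)  # a new clip starts at frame i
    return cuts
```

Splitting a 24-hour broadcast stream at every such cut is what yields a database of thousands of short clips ready for tagging.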

An antecedent of the T_Visionarium projects can be found in Aby Warburg's Mnemosyne,
a visual cultural atlas, a means of studying the internal dynamics of
imagery at the level of its medium rather than its content, performing
image analysis through montage and recombination. T_Visionarium
can be framed by the concept of aesthetic transcription; that is, the
way new meaning can be produced is based on how content moves from one
expressive medium to another. The digital allows the transcription of
televisual data, decontextualising the original and reconstituting it
within a new artifact. As the archiving abilities of the digital allow
data to be changed from its original conception, new narrative
relationships are generated between the multitudes of clips, and
meaningful narrative events emerge because of viewer interaction in a
transnarrative experience where gesture is all-defining. The
segmentation of the video reveals something about the predominance of
close-ups, the lack of panoramic shots, the heavy reliance on dialogue
in TV footage. These aesthetic features come strikingly to the fore in
this hybrid environment. The spatial contiguity gives rise to new ways
of seeing, and of reconceptualising in a spatial montage (Bennett 2008).
In T_Visionarium the material screen no longer exists (Figure
6). The boundary of the cinematic frame has been violated, hinting at
the endless permutations that exist for the user. Nor does the user
enter a seamless unified space; rather, he or she is confronted with the
spectacle of hundreds of individual streams. Pannini’s picture
galleries also hint at this infinitely large and diverse collection,
marvels to be continued beyond the limits of the picture itself.

Fig 6: Datasphere, T_Visionarium II © UNSW iCinema Research Centre

Open City

The T_Visionarium paradigm was also recently employed for OPEN CITY,
as part of the 4th International Architecture Biennale in Rotterdam
& the International Documentary Film Festival of Amsterdam, in 2009.
Via the program Scene-Detector, all 450 documentary films were
segmented into 15,000 clips. Categories and keywords here were related
in some way to the thematic of the Biennial and included tags such as
rich/poor, torn city, alternative lifestyles, religion, aerial views,
cityscapes, highrise, green, public/open, transport, slums. Also, time,
location, and cinematographic parameters such as riders, close-up, black
and white, color, day, night, were used to describe the footage (Figure
7).

Fig 7: Open City © UNSW iCinema Research Centre. Image: VPRO

CloudBrowsing

The interactive installation CloudBrowsing (2008-09) was one of the first works to be developed and shown in ZKM’s recently established PanoramaLab (Crowdbrowsing ZKM 2008b). The project lets users experience Web-based information retrieval in a new way:

Whereas our computer monitor only provides a
restricted frame, a small window through which we experience the
multilayered information landscape of the Net only partially and in a
rather linear mode, the installation turns browsing the Web into a
spatial experience: Search queries and results are not displayed as
text-based lists of links, but as a dynamic collage of sounds and
images. (Crowdbrowsing ZKM 2008b)

In the current version of the project (Figures 8, 9 & 10), the user browses the free online encyclopedia Wikipedia.
A filter mechanism ensures that only open content is displayed in the
installation. The cylindrical surface of the 360-degree PanoramaScreen
becomes a large-scale browser surrounding the user, who can thus
experience a panorama of his movements in the virtual information space.

Fig 8: Wikipedia pages, CloudBrowsing © ZKM

Fig 9: CloudBrowsing © ZKM

Fig 10: CloudBrowsing: interface control © ZKM

Current work

Data Sculpture Museum

This project is being developed as part of the Australian Research
Council Linkage Grant (2011 – 2014) for “The narrative reformulation of
multiple forms of databases using a recombinatory model of cinematic
interactivity” (UNSW iCinema Research Centre, Museum Victoria, ALiVE,
City University, ZKM Centre for Art and Media). The aim of this research
is to investigate re-combinatory search, transcriptive narrative and
multimodal analytics for heterogeneous datasets through their
visualization in a 360° stereoscopic space (Del Favero et al. 2009).
Specifically, it explores re-combinatory search of cultural data
(as a cultural artefact) as an interrogative, manipulable and
transformative narrative, responsive to and exposing multiple narrations
that can be arranged and projected momentarily (Deleuze 1989) over
that which is purposefully embedded and recorded in the architecture of
the data archive and its metadata, and witnessed (Ricoeur 2004). This project
builds upon the exploration and gains made in the development of T_Visionarium.

The datasets used include over 100,000 multimedia rich records
(including audio files, video files, high resolution monoscopic and
stereoscopic images, panoramic images/movies, and text files) from
Museum Victoria and the media art history database of the ZKM (http://on1.zkm.de/zkm/e/institute/mediathek/)
that include diverse subject areas from the arts and sciences
collections. The data are collated from collection management systems
and from web-based and exhibition-based projects. Additional metadata
and multimedia analysis will be used to allow for intelligent searching
across datasets. Annotation tools will provide users with the ability to
make their own pathways through the data terrain, a psychogeography of
the museum collections. Gesture-based interaction will allow users to
combine searches, using both image-based and text input methods. Search
parameters include:

  • Explicit (keyword search based on collections data and extra metadata tags added using the project),
  • Multimedia (e.g. show me all faces like this face; show me all videos on Australia, show me everything pink!),
  • Dynamic (e.g. show me the most popular search items; join my search
    to another co-user; record my search for others to see; add tags).
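The interplay of the explicit and dynamic modes above can be sketched as a toy ranking function; the record structure is an assumption, and the multimedia mode (similarity search over faces, colours and the like) would require feature extraction not shown here:

```python
from collections import Counter

search_log = Counter()  # accumulates popularity across all users' searches

def search(records, keyword):
    """Toy sketch combining two search modes: an explicit keyword filter
    whose hits are then ranked by the popularity each record has
    accumulated in earlier searches (the 'dynamic' mode)."""
    hits = [r for r in records if keyword in r["keywords"]]
    search_log.update(r["id"] for r in hits)  # every hit gains popularity
    return sorted(hits, key=lambda r: -search_log[r["id"]])
```

Because the shared log persists between queries, each user's search subtly reorders what later users see first, a minimal form of the "join my search to another co-user" behaviour.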

This project seeks to understand developments in media
aesthetics. Problems of the meaningful use of information are related to the
way users integrate the outcomes of their navigational process into
coherent narrative forms. In contrast to the interactive screen based
approaches conventionally used by museums, this study examines the
exploratory strategies enacted by users in making sense of large-scale
databases when experienced immersively in a manner similar to that
experienced in real displays (Latour 1988). In particular, evaluation
studies will ask:

  1. How do museum users interact with an immersive 360-degree data
    browser that enables navigational and editorial choice in the
    re-composition of multi-layered digital information?
  2. Do the outcomes of choices that underpin editorial re-composition of
    data call upon aesthetic as well as conceptual processes and in what
    form are they expressed? (Del Favero et al. 2009)

The recent advent of large-scale immersive systems has significantly
altered the way information can be archived, accessed and sorted. There
is significant difference between museum 2D displays that bring
pre-recorded static data into the presence of the user, and immersive
systems that enable museum visitors to actively explore dynamic data in
real-time. The experimental study into the meaningful use of data
involves the development of an experimental browser capable of engaging
users by enveloping them in an immersive setting that delivers
information in a way that can be sorted, integrated and represented
interactively. Specifications of the proposed experimental data browser
include:

  • immersive 360-degree data browser presenting multi-layered and heterogeneous data
  • re-compositional system enabling the re-organization and authoring of data
  • scalable navigational systems incorporating Internet functions
  • collaborative exploration of data in a shared immersive space by multiple users
  • intelligent interactive system able to analyze and respond to users’ transactions.

Rhizome of the Western Han

This project investigates the integration of high resolution
archaeological laser scan and GIS data inside AVIE. This project
represents a process of archaeological recontextualization, bringing
together remote sensing data from the two tombs (M27 & The Bamboo
Garden) with laser scans of funerary objects, in a spatial context
(Figures 11, 12 & 13). This prototype builds an interactive narrative
based on spatial dynamics, and cultural aesthetics and philosophies
embedded in the archaeological remains. The study of Han Dynasty (206
BC-220 AD) imperial tombs has always been an important field of
Chinese archaeology. However, only a few tombs of the Western Han Dynasty
have been scientifically surveyed and reconstructed. Further, the
project investigates a reformulation of narrative based on the
application of cyber mapping principles in archaeology (Forte 2010;
Kurillo et al. 2010).

Fig 11: Rhizome of the Western Han: inhabiting the tombs at 1:1 scale © ALiVE, CityU

Fig 12: Rhizome of the Western Han: iconographic hotspots © ALiVE, CityU

Fig 13: Rhizome of the Western Han: image browser © ALiVE, CityU

There is ample discourse to situate the body at the forefront of
interpretive archaeology research as a space of phenomenological
encounter. Post-processual frameworks for interpretive archaeology
advance a phenomenological understanding of the experience of landscape.
In his book, Body and Image: Explorations in Landscape Phenomenology,
archaeologist Christopher Tilley, for example, usefully contrasts
iconographic approaches to the study of representation with those of
kinaesthetic enquiry. Tilley’s line of reasoning provides grounding for
the research into narrative agency in large-scale, immersive and
sensorial, cognitively provocative environments (Kenderdine, 2010). This
project examines a philosophical discussion of what it means to inhabit
archaeological data ‘at scale’ (1:1). It also re-situates the theatre
of archaeology in a fully immersive display system, as “the
(re)articulation of fragments of the past as real-time event” (Pearson
& Shanks 2001).

Blue Dots

This project integrates the Chinese Buddhist Canon, the Koryo-version
Tripitaka Koreana, into the AVIE system (a project between ALiVE, City
University of Hong Kong and UC Berkeley). This version of the Canon,
enshrined at Haeinsa, Korea, is inscribed as UNESCO World Heritage.
The 166,000 pages of rubbings from the wooden printing blocks
constitute the oldest complete set of the corpus in print format
(Figures 14 & 15). Divided into 1,514 individual texts, the version
presents a challenging complexity, since the texts represent
translations from Indic languages into Chinese made over a 1,000-year period
(2nd-11th centuries). This is the world's largest single corpus,
containing over 50 million glyphs; it was digitized and encoded by
Prof Lew Lancaster and his team in a project that began in the 1970s
(Lancaster 2007, 2008a, 2008b).

Fig 14: Tripitaka Koreana © Korean Times (http://www.koreatimes.co.kr/www/news/art/2010/03/293_61805.html)

Fig 15: Tripitaka Koreana © Korean Times (http://www.koreatimes.co.kr/www/news/art/2010/03/293_61805.html)

The Blue Dots (http://ecai.org/textpatternanalysis/)
project undertaken at Berkeley as part of the Electronic Cultural Atlas
Initiative (ECAI) abstracted each glyph from the Canon into a blue dot,
and gave metadata to each of these Blue Dots, allowing vast searches to
take place in minutes rather than scholarly years. In the search
function, each blue dot also references an original plate photograph for
verification. The shape of these wooden plates gives the blue dot array
its form (Figure 16). As a searchable database, it exists in
prototype form on the Internet (currently unavailable). Results are
displayed in a dimensional array within which users can view and navigate
the image. The image uses both the abstracted form of a 'dot'
and color to inform the user of the information being retrieved.
Each blue dot represents one glyph of the dataset. Alternate colors
indicate the position of search results. The use of color, form, and
dimension for fast understanding of the information is essential for
large data sets where thousands of occurrences of a target word/phrase
may be seen. Analysis across this vast text retrieves visual
representations of word strings, clustering of terms, and automatic analysis
of ring construction, with results viewable by time, creator, and place. The Blue Dots
method of visualization is a breakthrough for corpora visualization and
underlies the strategies of visual abstraction
undertaken in this project. The application of an omni-spatial
distribution of the text solves problems of data occlusion and
enhances network analysis techniques to reveal patterns, hierarchies and
interconnectedness (Figures 17 & 18). Using a hybrid approach to
data representation, the project will incorporate audification strategies to
augment interaction coherence and interpretation. The data browser is
designed to function in two modes: the Corpus Analytics mode for text
only, and the Cultural Atlas mode that incorporates original texts,
contextual images and geospatial data. Search results can be saved and
annotated.
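The glyph-to-dot abstraction described above can be sketched in a few lines. The `Dot` record, the toy corpus, and the plate-numbering rule below are illustrative assumptions, not the ECAI implementation:

```python
from dataclasses import dataclass

@dataclass
class Dot:
    glyph: str       # the original character
    plate: int       # printing-block plate the glyph sits on
    index: int       # position of the glyph within the corpus
    color: str = "blue"

def build_dots(corpus: str) -> list:
    # In the real project each dot references a photograph of its plate;
    # here a "plate" is simply every 23 glyphs (an assumed column length).
    return [Dot(g, i // 23, i) for i, g in enumerate(corpus)]

def highlight(dots: list, corpus: str, target: str) -> int:
    """Colour every occurrence of `target` red and return the hit count."""
    hits = 0
    start = corpus.find(target)
    while start != -1:
        for k in range(start, start + len(target)):
            dots[k].color = "red"
        hits += 1
        start = corpus.find(target, start + 1)
    return hits

corpus = "法不孤起仗境方生法不孤起"   # toy 12-glyph corpus
dots = build_dots(corpus)
print(highlight(dots, corpus, "法不孤起"))  # 2 occurrences found
```

Because every dot keeps its corpus index and plate number, a coloured hit can always be traced back to the original plate photograph for verification, as the paragraph above describes.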

Fig 16: Blue Dots: abstraction of characters to dots and pattern arrays © ECAI, Berkeley

Fig 17: Prof Lew Lancaster interrogates the prototype of Blue Dots AVIE © ALiVE, CityU. Image: Howie Lan

Fig 18: Close up of blue dots & corresponding texts, prototype of Blue Dots AVIE © ALiVE, CityU. Image: Howie Lan

The current search functionality ranges from visualizing word
distribution and frequency to revealing other structural patterns such
as chiastic structures and ring compositions. In the Blue Dots
AVIE version, the text is also visualized as a matrix of simplified
graphic elements representing each of the words. This will enable users
to identify new linguistic patterns and relationships within the matrix,
as well as access the words themselves and related contextual
materials. The search queries will be applied across multiple languages,
accessed collaboratively by researchers, extracted and saved for later
re-analysis.
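As a toy illustration of one such structural pattern, a ring (chiastic) composition can be tested by checking whether a sequence of theme labels mirrors around its centre; `is_ring` is an illustrative sketch, not the project's detector:

```python
def is_ring(themes: list) -> bool:
    """A ring (chiastic) composition mirrors its themes around a centre,
    e.g. A B C B' A'; a sequence of theme labels is chiastic here when
    it reads the same from both ends."""
    return themes == themes[::-1]

print(is_ring(["A", "B", "C", "B", "A"]))  # True
print(is_ring(["A", "B", "C", "A", "B"]))  # False
```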

The data provide an excellent resource for the study of dissemination
of documents over geographic and temporal spheres. Additional metadata,
such as present day images of the monasteries where the translation
took place, will be included in the data array. The data will also be
sonified. The project will design new omni-directional metaphors for
interrogation and the graphical representation of complex relationships
between these textual datasets to solve the significant challenges of
visualizing both abstract forms and close-up readings of this rich data.
In this way, we hope to set benchmarks in visual analytics, scholarly
analysis in the digital humanities, and the interpretation of classical
texts.

Social Networks of Eminent Buddhists

This visualization of social networks of Chinese Buddhists is based
on the Gaoseng zhuan corpus produced at the Digital Archives Section of
Dharma Drum Buddhist College, Taiwan (http://buddhistinformatics.ddbc.edu.tw/biographies/socialnetworks/interface/).
The Gaoseng zhuan corpus contains the biographies of eminent Buddhists
between the 2nd and 17th centuries. The collections of hagio-biographies
of eminent Buddhist monks and nuns are one of the most interesting
sources for the study of Chinese Buddhism. These biographies offer a
fascinating glance into the lives of religious professionals in China
between c. 200 and 1600 CE. In contrast to similar genres in India or
Europe, the Chinese hagio-biographies do not, in the main, emphasize
legend, but, following Confucian models of biographical literature, are
replete with datable historical facts. The project uses TEI
to mark up the four most important of these collections, which together
contain more than 1,300 hagio-biographies. The markup identifies person
and place names as well as dates (http://buddhistinformatics.ddbc.edu.tw/biographies/gis/).

It further combines these into nexus points to describe events of the
form: one or more person(s) were at a certain time at a certain place
(Hung et al. 2009; Bingenheimer et al. 2009; Bingenheimer et al. 2011).
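A minimal sketch of such nexus records, under the assumption of a simple person/place/year triple; the `NexusEvent` type and the sample data are hypothetical, not the project's actual TEI schema:

```python
from dataclasses import dataclass

@dataclass
class NexusEvent:
    persons: list  # one or more persons present at the event
    place: str     # where they were
    year: int      # when (approximate year)

def co_presence(events: list, person: str) -> set:
    """People who share at least one nexus event with `person` --
    the raw material for a social-network edge."""
    out = set()
    for ev in events:
        if person in ev.persons:
            out.update(p for p in ev.persons if p != person)
    return out

# Illustrative records, not drawn from the corpus:
events = [
    NexusEvent(["Daoan", "Huiyuan"], "Xiangyang", 365),
    NexusEvent(["Kumarajiva"], "Chang'an", 401),
]
print(sorted(co_presence(events, "Daoan")))  # ['Huiyuan']
```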

Fig 19: Social Networks of Eminent Buddhists JavaScript applet © Digital Archives, Dharma Drum Buddhist College

On the Web, the social networks are displayed as spring-loaded nodal
points (JavaScript applet; Figure 19). For immersive
architectures such as AVIE, however, there is no intuitive way to visualize such a
social network as a 2D relationship graph in the virtual 3D space
provided by the architecture itself. This project therefore proposes an
effective mapping that projects the 2D relationship graph into 3D
virtual space and provides an immersive visualization and interaction
system for intuitively exploring the social network within
AVIE. The basic concept is to map the center
of the graph (the person currently in focus) to the center of the virtual
3D world and project the graph horizontally onto the ground of the virtual
space. This mapping provides an immersive visualization in which the
most closely related nodes lie closest to the center of the virtual world
(where the user is standing and operating). However, this would place all
the nodes and connections at the same horizontal level, and therefore
they will obstruct each other. To resolve this, nodes are raised
according to their distance from the world center; i.e. the graph is
mapped onto a paraboloid such that farther nodes sit at a higher
horizontal level. This mapping optimizes the use of 3D space and gives
the user an intuitive image: smaller, higher objects are farther
away. In general, there are no overlapping nodes, as nodes at
different distances from the world center have different heights;
thus the user views them at different pitch angles. The connections
between nodes are also projected onto the paraboloid so that they
become curve segments; viewers can follow and trace the connections
easily, since the nodes are already arranged at different distances and
horizontal levels (Figures 20 & 21).
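The paraboloid mapping described above can be sketched as follows; the helper names and the constant `k` (which controls how steeply distant nodes rise) are illustrative assumptions, not the AVIE implementation:

```python
def to_paraboloid(x: float, y: float, k: float = 0.05) -> tuple:
    """Map a 2D layout coordinate (focus person at the origin) onto a
    paraboloid: the node keeps its horizontal bearing while its height
    grows with the squared distance from the centre, so distant nodes
    sit higher and do not occlude nearer ones."""
    return (x, y, k * (x * x + y * y))  # z is the raised height

def project_edge(p: tuple, q: tuple, samples: int = 8, k: float = 0.05) -> list:
    """Sample a straight 2D edge and lift each sample onto the
    paraboloid, turning the connection into a curve segment."""
    pts = []
    for i in range(samples + 1):
        t = i / samples
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        pts.append(to_paraboloid(x, y, k))
    return pts

# A node 5 units from the focus sits higher than one 1 unit out:
print(to_paraboloid(3.0, 4.0)[2] > to_paraboloid(1.0, 0.0)[2])  # True
```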

Fig 20: Prototype of Social Networks of Eminent Buddhists, AVIE © ALiVE, CityU

Fig 21: Researcher using Prototype of Social Networks of Eminent Buddhists, AVIE © ALiVE, CityU

To reduce the total number of nodes displayed and emphasize the
local relationships of the person currently in focus, only the first two
levels of people related to that person are shown. Users can change
focus to another person by selecting the new node with the 3D pointer.
The relationship graph updates dynamically so that only the first two
levels of relations are displayed. Users can also check the biography of
any displayed person by holding the 3D pointer on the corresponding node;
the biography of the selected person is displayed in a separate floating
window. Geospatial referencing and a dynamic Google Earth map are soon
to be implemented.
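The two-level display rule above amounts to a depth-limited breadth-first traversal of the relationship graph; the adjacency data and names below are illustrative, not the Gaoseng zhuan dataset:

```python
from collections import deque

def ego_network(adj: dict, focus: str, depth: int = 2) -> set:
    """Breadth-first traversal returning the focus person plus everyone
    within `depth` hops, mirroring the two-level display rule."""
    seen = {focus}
    frontier = deque([(focus, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue  # do not expand beyond the requested level
        for neighbour in adj.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, d + 1))
    return seen

# Toy network: Shi Le is three hops from the focus, so he is not shown.
adj = {
    "Huiyuan": ["Daoan", "Kumarajiva"],
    "Daoan": ["Fotudeng"],
    "Kumarajiva": ["Sengzhao"],
    "Fotudeng": ["Shi Le"],
}
print(sorted(ego_network(adj, "Huiyuan")))
```

Changing focus simply re-runs the traversal from the newly selected node, which is why the displayed graph can update dynamically.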

5. Conclusion

The projects described begin to take on core challenges of visual
analytics, multimedia analysis, text analysis and visualization inside
AVIE to provide powerful modalities for an omni-directional exploration
of museum collections, archaeological laser scan data and multiple
textual datasets. The research is responding to the need for embodied
interaction and knowledge-based interfaces that enhance collaboration,
cognition and perception, and narrative coherence. Through AVIE, museum
users and scholars are investigating the quality of narrative coherence
brought to interactive navigation and re-organization of information in
360-degree 3D space. There will be ongoing reporting related to the Data
Sculpture Museum, which has recently commenced as part of a three-year
project, and the Blue Dots. Upcoming projects in AVIE include a
real-time visualization of the Israel Museum, Jerusalem, Europeana
dataset (5,000 records; http://www.europeana.eu)
that is looking for new ways to access museum collections with existing
(and constrained) metadata, and an interactive installation using laser
scan data from the UNESCO World Heritage site of the Dunhuang Caves,
Gobi Desert, China.

6. Acknowledgements

The author would like to acknowledge the contribution of colleagues at ALiVE: Prof Jeffrey Shaw, William Wong and Dr Oscar Kin Chung Au.
The author also thanks members of the Department of Chinese,
Translation and Linguistics, CityU, Prof Jonathan Webster and Dr John
Lee, for their contributions to the textual analytics. The title of this
paper, Cultural Data Sculpting, is inspired by Zhao & Vande Moere (2008).

Data Sculpture Museum: The narrative
reformulation of multiple forms of databases using a recombinatory model
of cinematic interactivity. Partners: UNSW iCinema Research
Centre, Museum Victoria, ZKM, City University of Hong Kong. Researchers:
Assoc Prof Dr Dennis Del Favero, Prof Dr Horace Ip, Mr Tim Hart, Assoc
Prof Dr Sarah Kenderdine, Prof Jeffrey Shaw, Prof Dr Peter Weibel. This
project is funded by the Australian Research Council 2011-2014.

Rhizome of the Western Han.
Partners: ALiVE, City University of Hong Kong, UC Merced. Researchers:
Assoc Prof Dr Sarah Kenderdine, Prof Maurizio Forte, Carlo Camporesi,
Prof Jeffrey Shaw.

Blue Dots AVIE: Tripitaka Koreana. Partners: ALiVE, City University of Hong Kong, UC Berkeley.
Researchers: Assoc Prof Dr Sarah Kenderdine, Prof Lew Lancaster, Howie
Lan, Prof Jeffrey Shaw.

Social Networks of Eminent Buddhists. Partners: ALiVE, City University of Hong Kong, Dharma Drum Buddhist College. Researchers: Assoc Prof Dr Sarah Kenderdine, Dr Marcus Bingenheimer, Prof Jeffrey Shaw.

7. References

Allosphere (http://www.allosphere.ucsb.edu/). Consulted Nov 30, 2010.

Applied Laboratory for Interactive Visualization and Embodiment – ALiVE, CityU, Hong Kong (http://www.cityu.edu.hk/alive). Consulted Nov 30, 2010.

Bachelard, G. (1998), Poetic imagination and reveries (L’invitation au voyage 1971), Colette Guadin (trans.), Connecticut: Spring Publications Inc.

Bennett, J. (2008), T_Visionarium: a Users Guide, University of New South Wales Press Ltd.

Bennett, T. (2006), ‘Civic seeing: museums and the organization of vision’, in S. MacDonald (ed.), Companion to museum studies, Oxford: Blackwell, pp. 263-81.

Bingenheimer, M., Hung, J.-J. & Wiles, S. (2009), “Markup meets GIS – Visualizing the ‘Biographies of Eminent Buddhist Monks’”, in Proceedings of the International Conference on Information Visualization 2009, IEEE Computer Society. DOI: 10.1109/IV.2009.91

Bingenheimer, M., Hung, J-J., Wiles, S. (2011): “Social Network Visualization from TEI Data”, Literary and Linguistic Computing Vol. 26 (forthcoming).

Blue Dots (http://ecai.org/textpatternanalysis/). Consulted Nov 30, 2010.

Bonini, E. (2008), ‘Building virtual cultural heritage environments: the embodied mind at the core of the learning processes’, in International Journal of Digital Culture and Electronic Tourism, vol. 2, no. 2, pp. 113-25.

Chan, S. (2007), ‘Tagging and searching: Serendipity and museum collection databases’, in Trant, J. & Bearman, D. (eds), Museums and the Web 2007: Proceedings, Toronto: Archives & Museum Informatics. Available online (http://www.archimuse.com/mw2007/papers/chan/chan.html). Consulted June 20, 2009.

Chinchor, N, Thomas, J., Wong, P. Christel, M.
& Ribarsky, W. (2010), Multimedia Analysis + Visual Analytics =
Multimedia Analytics, September/October 2010, IEEE Computer Graphics, vol. 30 no. 5. pp. 52-60.

Chion, M., et al. (1994), Audio-Vision, Columbia University Press.

Christel, M.G. (2009), Automated Metadata in Multimedia Information Systems: Creation, Refinement, Use in Surrogates, and Evaluation. San Rafael, CA: Morgan and Claypool Publishers

Collins, C. Carpendale, S. & Penn, G.
(2009), DocuBurst: Visualizing Document Content using Language
Structure. Computer Graphics Forum (Proceedings of Eurographics/IEEE-VGTC Symposium on Visualization (EuroVis ‘09)), 28(3): pp.1039-1046.

Cloudbrowsing, ZKM (2008a) [video] (http://container.zkm.de/cloudbrowsing/Video.html). Consulted Nov 30, 2010.

Cloudbrowsing, ZKM (2008b) (http://www02.zkm.de/you/index.php?option=com_content&view=article&id=59:cloudbrowsing&catid=35:werke&Itemid=82&lang=en). Consulted Nov 30, 2010.

Del Favero, D., Ip, H., Hart, T., Kenderdine,
S., Shaw, J., Weibel, P. (2009), Australian Research Council Linkage
Grant, “Narrative reformulation of museological data: the coherent
representation of information by users in interactive systems”. PROJECT
ID: LP100100466

DeFanti, T. A., et al. (2009), The StarCAVE, a third-generation CAVE & virtual reality OptIPortal. Future Generation Computer Systems, 25(2), pp.169-178.

Deleuze, G. (1989) Cinema 2: the Time Image. Translated by Hugh Tomlinson and Robert Galeta, Minnesota: University of Minnesota.

Electronic Cultural Atlas Initiative (www.ecai.org). Consulted Nov 30, 2010.

Forte, M. (2010), Introduction to Cyberarcheology, in Forte, M (ed) Cyber Archaeology, British Archaeological Reports BAR S2177 2010.

Green, T. M., Ribarsky & Fisher (2009), Building and Applying a Human Cognition Model for Visual Analytics. Information Visualization, 8(1), pp.1-13.

Harris, J. & Kamvar, S. (2009), We feel fine. New York, NY: Scribner.

Harrison, C. & Romhild, C. (2008), The Visualization of the Bible. (http://www.chrisharrison.net/projects/bibleviz/index.html). Consulted Nov 30, 2010.

Havre, S., et al. (2000), ThemeRiver: Visualizing Theme Changes over Time. Proc. IEEE Symposium on Information Visualization, pp.115-123.

HIPerSpace CALIT2. (2010), Research Projects: HIPerSpace. (http://vis.ucsd.edu/mediawiki/index.php/Research_Projects:_HIPerSpace). Consulted Nov 30, 2010.

Horizon Report, 2010, Four to Five Years: Visual Data Analysis. Available from (http://wp.nmc.org/horizon2010/chapters/visual-data-analysis/). Consulted Nov 30, 2010.

Horizon Report, Museum edition, (2010). Available from (www.nmc.org/pdf/2010-Horizon-Report-Museum.pdf.) Consulted Nov 30, 2010.

Hung, J.-J., Bingenheimer, M. & Wiles, S. (2010), “Digital Texts and GIS: The interrogation and coordinated visualization of Classical Chinese texts”, in Proceedings of the International Conference on Computer Design and Applications ICCDA 2010.

Kasik, D. J., et al. (2009), Data transformations & representations for computation & visualization. Information Visualization 8(4), pp.275-285.

Keim, D. A., et al. (2006), Challenges in Visual Data Analysis. Proc. Information Visualization (IV 2006), pp.9-16. London: IEEE.

Keim, D. A., et al. (2008), Visual Analytics: Definition, Process, & Challenges. Information Visualization: Human-Centered Issues and Perspectives, pp.154-175. Berlin, Heidelberg: Springer-Verlag.

Schettino, P. & Kenderdine, S. (2010),
‘PLACE-Hampi: interactive cinema and new narratives of inclusive
cultural experience’, International Conference on the Inclusive Museum,
Istanbul, June 2010, Inclusive Museums Journal (In press).

Kenderdine, S. & Shaw, J. (2009), ‘New
media insitu: The re-socialization of public space’, in Benayoun, M.
& Roussou, M. (eds), International Journal of Art and Technology, special issue on Immersive Virtual, Mixed, or Augmented Reality Art, vol 2, no.4, Geneva: Inderscience Publishers, pp.258 – 276.

Kenderdine, S. (2010), ‘Immersive visualization architectures and situated embodiments of culture and heritage’ Proceedings of IV10 – 14th International Conference on Information Visualisation, London, July 2010, IEEE, pp. 408-414.

Kenderdine, S., Shaw, J. & Kocsis, A.
(2009), ‘Dramaturgies of PLACE: Evaluation, embodiment and performance
in PLACE-Hampi’, DIMEA/ACE Conference (5th Advances in Computer Entertainment Technology Conference & 3rd Digital Interactive Media Entertainment and Arts Conference), Athens, November 2008, ACM. Vol 422, pp. 249-56.

Kurillo, G. Forte, M. Bajcsy, R. (2010),
Cyber-archaeology and Virtual Collaborative Environments, in Forte, M.
(ed) 2010, BAR S2177 2010: Cyber-Archaeology.

Lancaster, L. (2007), The First Koryo Printed Version of the Buddhist Canon: Its Place in Contemporary Research. Nanzen-ji Collection of Buddhist Scriptures and the History of the Tripitaka Tradition in East Asia. Seoul: Tripitaka Koreana Institute.

Lancaster, L. (2008a), Buddhism & the New Technology: An Overview. Buddhism in the Digital Age: Electronic Cultural Atlas Initiative. Ho Chi Minh: Vietnam Buddhist U.

Lancaster, L. (2008b), Catalogues in the Electronic Era: CBETA and The Korean Buddhist Canon: A Descriptive Catalogue. Taipei: CBETA (electronic publication).

Lancaster, L. (2010), Pattern Recognition & Analysis in the Chinese Buddhist Canon: A Study of “Original Enlightenment”. Pacific World.

Latour, B. (1988), Visualisation and Social Reproduction. In G. Fyfe and J. Law (eds), Picturing Power: Visual Depiction and Social Relations. London: Routledge, pp.15-38.

Liu, S., et al. (eds.) (2010), Proc. 1st Int. Workshop on Intelligent Visual Interfaces for Text Analysis, IUI’10.

Manovich, L. (2008), The Practice of Everyday
(Media) Life. In R. Frieling (Ed.), The Art of Participation: 1950 to
Now. London: Thames and Hudson.

Manovich, L. (2009), How to Follow Global Digital Cultures, or Cultural Analytics for Beginners. Deep Search: The Politics of Search beyond Google. Studienverlag (German version) and Transaction Publishers (English version).

Many Eyes. (http://www.manyeyes.alphaworks.ibm.com). Consulted Nov 30, 2010.

Maturana, H. & Varela, F. (1980), Autopoiesis and cognition: The realization of the living, vol. 42, Boston Studies in the Philosophy of Science, Dordrecht: D. Reidel.

McCandless, D. (2010.) The beauty of data visualization [Video file]. (http://www.ted.com/talks/lang/eng/david_mccandless_the_beauty_of_data_visualization.html). Consulted Nov 30, 2010.

McGinity, M., et al. (2007), AVIE: A Versatile Multi-User Stereo 360-Degree Interactive VR Theatre. The 34th Int. Conference on Computer Graphics & Interactive Techniques, SIGGRAPH 2007, 5-9 August 2007.

National Science Foundation, (2007), Cyberinfrastructure Vision for 21st Century Discovery. Washington: National Science Foundation.

Nechvatal, J. (2009), Towards an Immersive Intelligence: Essays on the Work of Art in the Age of Computer Technology and Virtual Reality (1993-2006) Edgewise Press, New York, NY.

Paley, B. (2002), TextArc (http://www.textarc.org). Consulted Nov 30, 2010.

Pearson, M. & Shanks, M. (2001) Theatre/Archaeology, London: Routledge.

Pike, W. A., et al. (2009), The science of interaction. Information Visualization, 8(4), pp.263-274.

Ricoeur, P. (2004), Memory, History, Forgetting, University of Chicago Press.

Speer, R., et al. (2010), Visualizing Common Sense Connections with Luminoso. Proc. 1st Int. Workshop on Intelligent Visual Interfaces for Text Analysis (IUI’10), pp.9-12.

Springer, M., et al. (2008), For the Common Good: the Library of Congress Flickr Pilot Project. (http://www.loc.gov/rr/print/flickr_report_final.pdf). Consulted 8/02/2009.

T_Visionarium (2003-2008), (http://www.icinema.unsw.edu.au/projects/prj_tvis_II_2.html). Consulted Nov 30, 2010.

Thai, V. & Handschuh, S. (2010), Visual Abstraction and Ordering in Faceted Browsing of Text Collections. Proc. 1st Int. Workshop on Intelligent Visual Interfaces for Text Analysis (IUI’10), pp.41-44.

Thomas, J. & Kielman, J. (2009), Challenges for visual analytics. Information Visualization, 8(4), pp.309-314.

Tilley, C. (2008), Body and Image: Explorations in Landscape Phenomenology, Walnut Creek: Left Coast Press.

Visnomad (www.visnomad.org). Consulted Nov 30, 2010.

Visual Complexity (www.visualcomplexity.com/). Consulted Nov 30, 2010.

Wattenberg, M. (2002), Arc Diagrams: Visualizing Structure in Strings. Proc. IEEE Symposium on Information Visualization, pp.110-116. Boston, MA.

West, R., et al. (2009), Sensate abstraction: hybrid strategies for multi-dimensional data in expressive virtual reality contexts. Proc. 21st Annual SPIE Symposium on Electronic Imaging, vol 7238 (2009), 72380I-72380I-11.

ZKM Centre for Art and Media (http://on1.zkm.de/zkm/e/). Consulted Nov 30, 2010.

Zhao J. and Vande Moere A. (2008), “Embodiment
in Data Sculpture: A Model of the Physical Visualization of
Information”, International Conference on Digital Interactive Media in
Entertainment and Arts (DIMEA’08), ACM International Conference
Proceeding Series Vol.349, Athens, Greece, pp.343-350.

Zielinski, S. (2006), Deep time of the media: Toward an archaeology of hearing and seeing by technical means, Custance, G. (trans.), Cambridge, MA: MIT Press.

Cite as:

Kenderdine, S., and T. Hart, Cultural Data
Sculpting: Omni-spatial Visualization for Large Scale Heterogeneous
Datasets. In J. Trant and D. Bearman (eds). Museums and the Web 2011: Proceedings.
Toronto: Archives & Museum Informatics. Published March 31, 2011.
Consulted
November 14, 2014.
http://conference.archimuse.com/mw2011/papers/cultural_data_sculpting_omnispatial_visualization_large_scale_heterogeneous_datasets

Posted March 16, 2011 - 8:07am by Sarah Kenderdine
