Trip Report
1996 ACM Computer Graphics Conference
New Orleans, Louisiana
Commentary by:
	Michael L. Davis
My comments on my commentary [meta commentary] are enclosed in brackets [].
I attended the conference with Cornelia Stinchcomb (Neli).
	The conference took place in New Orleans from Sunday, August 4 through
	Friday, August 9 at the convention center. The convention center is a
	couple of blocks from the French Quarter and is next to the RiverWalk 
	shopping center complex along the Mississippi river. 
	There were about 30,000 attendees. The male to female ratio was about 
	4 to 1. We did not see anyone we knew there. We got there a few days 
	early [it was cheaper!] and got a chance to look around. We had a 
	conference passport which allowed us to attend any course, paper, panel,
	...  everything. I estimate that there were about 7,000 people who 
	attended the courses, and about 7,000 people who saw the papers and 
	panels [much more than I expected].
	Maybe 10-20% were programmers and the rest were some kind of graphics 
	artist, graphics hardware engineer, or graphics content developer. I 
	believe this skewing to be responsible for the lack of technical detail
	in the papers, panels and courses [however the proceedings still
	seem to be aimed towards techies]. Video cameras were allowed in the
	exhibition but not in the papers, panels or courses.
	The events to see were courses, papers, panels, applications, technical 
	sketches, animator sketches, applications(demos), digital bayou (demos), 
	art show, and the exhibition. Many of the events ran concurrently
	and it was often difficult to choose among 2 to 4 [out of at least
	10 possible] very interesting but simultaneously held events [and it 
	takes 10 minutes to walk from one side of the convention center to the 
	other]. However, there was only one paper presented at a time until one 
	session during the very last day. Presenters now go to great lengths 
	to create videos which describe in great detail how and why their 
	software works. A lot of the paper presentations had animations showing 
	how the code supporting the paper worked and what it did [very well
	done, too]. 
Notable Events
	The notable events at this SIGGRAPH were:
	* The Levoy/Hanrahan paper on Light Fields
	* The number of papers from Microsoft Research (using SGI workstations
	  for the most part) vs. those from workstation companies
	* The amount of work being done on autonomous figures
	* The construction of 3D models from photographs
	* The announcement of the Talisman video card specification/design.
	* Information visualization is a hot topic
	* More buzzwords: (synthetic video, synthetic audio)
	* Many companies were hiring: SGI, DreamWorks and Pixar (both had a 
	  booth dedicated to this purpose) and other, less well-known 
	  companies.
	* It seemed to me that VRML and the WWW were taken for granted as
	  the present/future and did not generate much excitement.
	* Java was mentioned but it was not really ubiquitous like VRML and
	  the WWW.
	* I learned a lot about humans: about the fovea, about the uh-huh
	  factor, about the brain's internal multimedia cache buffers.
	* Creating autonomous characters is/will be a new craft/art-form.
	* Coolest T-shirt (that I can remember): An SGI shirt that said:
	  "Attitude Inside". Notable mention: "Will 3D Model for Food".
	* All of the course notes, proceedings and even ordinary books had 
	  their pages curled by the humidity.
	* The Digital Bayou was a number of connecting rooms filled with demos.
	  The rooms had very little lighting and there was fish-netting strung
	  everywhere and it had kind of an eerie feel to it.
	* There was a Startup Park on the exhibition floor that had a number of
	  desks with computers on them with one or two people giving demos. A
	  number of these startups were showing VRML modeling tools.
	* There were walkways above the convention halls in which the exhibition
	  was going on and it was fun to look down and see it in its glory all
	  at once.
The Hotel Situation
	We (Neli and I) stayed in 4 different hotels (Sheraton, Le Meridien, 
	La Quinta, Hilton Riverside). Because of the lack of hotel rooms for a 
	large conference of 30,000 and because we sent our registration in on 
	the last day, we were unable to book our stay in any other way. The 
	Sheraton was the best of the lot but we only stayed there one day. La 
	Quinta was located in Slidell, a city 30 minutes away from the 
	convention center, which was where the SIGGRAPH housing booked us and 
	which was where we would have stayed without Neli's [Thank you Neli!] 
	massive efforts to get us into local hotels.
The French Quarter
	There are lots of French-speaking people. It is expensive to eat there.
	Very fancy antique shops [anyone for a nice marble-topped parlor table 
	for $40,000?]. Lots of chandeliers. The French Antique shop has very
	nice fireplace mantelpieces [from $12,000 to $45,000; if you think you
	have too much money, go there, you will no longer]. The best shop was 
	M.S. Rau Antiques (800.544.9440), which had a large selection of 
	mechanical music boxes [probably more than 20, but again: $12,000 to 
	$40,000] and a very large architect's table [~8ft x 16ft, lots of inlay 
	work, probably _not_ overpriced at $95,000]. There are probably about 
	25-30 nice stores in a 4 block radius. There were a few art galleries 
	but nothing special.
	Bourbon street was a spectacle: an approximately 12-block,
	semi-cordoned-off street full of gawking tourists, lined with bars,
	night clubs and strip joints. Each place BLASTS music as loud as
	possible and places a person outside to draw people in, and at places
	along the street you can hear 3 bands equally well, blending into one
	large noise. There were approximately 2 jazz, 1 country, 2 blues, 2
	funk and maybe 10 fusion bands. This activity was present only on
	Bourbon street; it starts about 8:00pm and really gets going about
	10:00pm, and we did not stay later to see what happens [courses and
	papers started at 8:15. Yeah, we are wimps and probably should have
	hung out to see the madness].
	The French Quarter and the convention center are in the big city. It is 
	dirty and smelly. Garbage is piled out on the sidewalks, water is 
	dripping off of the buildings onto the sidewalks at random, it smells 
	like pee and vomit and cigarettes. There are often unsavory characters 
	[who make Boulder CO's transients look healthy and sober] just hanging 
	out staring at people. Alcohol is available everywhere at any time 
	[even inside the conference!].
	[I find myself attracted (galleries, music, looser attitude towards
	sexuality) and repulsed (dirty, dangerous, too much drinking) at the 
	same time].
The Food Situation
	Fish. Lots of it. But I don't eat them, so we found 1 Indian, 1 
	vegetarian and 1 Chinese restaurant (with one vegetarian dish) in the 
	French Quarter. We later found a couple more Chinese and one Indian 
	restaurant north of the Quarter, but the neighborhood was somewhat 
	more run-down.
Sequence of Events:
	Registration opens at 7:00pm. We get there about 7:20. It is packed. 
	We get out of there around 9:00 or so. We each get about 5-6 course 
	notes, the proceedings, visual proceedings, CDROMs, t-shirt, watch, 
	coffee cup and a nerf-frisbee. Spent the rest of the evening trying 
	to figure out what to see and do the next day [this takes awhile!]. 
	This was to remain the pattern for each evening. Planning much more 
	ahead than the next day is just too overwhelming.
	Courses start at 1:30pm. We both attend the Information Visualization
	course.
	A day full of courses. The course reception in the evening (8-10pm).
	Last day of courses. The exhibition opens at 10:00am. Electronic
	theater from 6-8pm.
	The keynote speech. Then the papers finally start. The papers and
	panels reception from 8-10pm at the Marriott.
	Exhibition closes at 3:00. We rent a car for our one night stay in 
	Last day. Mostly papers to see. Start on our 8.5 hour trip home.
	Arrive home at 2:30 am. Sam the cat is quite hungry. So is Neli.
	Marc Levoy received the Computer Graphics Achievement Award for his
	Volume Rendering work (volumes rendered directly from sampled data
	without first creating an intermediate surface representation).
	Douglas Adams (author of "The Hitchhiker's Guide to the Galaxy", et al.)
	presented the keynote speech. The keynote speech:
	[Pacing up and down - springing from one topic to another].
	Now working for Digital Village doing a game called Starship Titanic.
	In reference to the new artistic media springing up: Each medium has 
	its own grain; artists need tools that allow them to work with the 
	particular quality of that grain.
	In reference to the way one has to work around performance and
	functionality problems: Make the most of the limitations ... while we 
	still have them.
	Starship Titanic mixes the Myst and Doom fly-through techniques: one
	half of the ship in one way, through the human's eyes, and the other
	half is through the computer's eyes. [i.e. Myst scenes are beautifully 
	rendered ahead of time, Doom scenes are not as beautiful but are 
	rendered in real-time allowing the user to move anywhere in the scene].
	Radio has the best renderer, computers are next: want direct mind to
	art connection.
	Talked about a new project to put a movie version of The Hitchhiker's 
	Guide to the Galaxy on IMAX, which will be a sequence of 40-minute 
	films/chapters. Looking for interested parties (i.e. funding, worker 
	bees). email
	Called for people to support Apple; they did good things in the past, 
	made some silly and some [awfully stupid - I forget the British term he
	used...] mistakes, but do we want the _other guy_ to completely take 
	over?
	Evolution of what a computer is:
		* Calculator (VisiCalc)
		* Typewriter (WordStar)
		* TV, with a typewriter in front of it (Games)
		* Brochure (WWW)
	Computers model all of these tools. They are essentially an extension of
	the human ability to model things.
	Called for people to bug their hotel managers about getting, not phone 
	dataports, but ethernet connections to the hotel intranet which is 
	connected to the internet. "We want more bandwidth!" ala "I want
	my MTV!".
	Talked about a survey which revealed how few people knew what made the
	seasons of the year. So we need a bumper sticker: "It's the axial tilt,
	stupid", or for Texans: "It's the axial tilt, stoopid". Said what was
	needed was a visualization tool that modeled the planets. That these
	modelers should in general:
		present information->accept information/questions->present...
	i.e. almost infinitely customizable through their user interface.
	Reality -> filter (eyes, ears, ...) -> filter (mind's model of the 
	world) -> us. 
	There are blank spots in our vision where the eye's optic nerve 
	attaches to the retina. 
	Eyes move in spurts; can these spurts be anticipated? Eyes focus on
	one very narrow range of view (using the fovea). A computer could
	possibly generate information on a screen only in the small area 
	where the user is actually looking. This screen would be 
	indecipherable to anyone else if the computer placed non-related info 
	in other areas of the screen.
	The computer extends our senses: Takes data that is invisible and makes
	it visible. For example: when designing a road that winds through some
	hills, one needs to minimize cost, windiness, and hilliness. One could
	devise a system that provided a joystick with haptic feedback that 
	provided a force against any movement of the user's hand in proportion 
	to an increase in cost, windiness or hilliness. Then the user could 
	just 'feel' their way to a solution.
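	[A minimal sketch of the 'feel your way to a solution' idea, in
	Python. The cost function, its weights, and the function names are
	mine, invented for illustration; the talk gave no implementation]:

```python
# Sketch: oppose the user's hand motion in proportion to how much the
# combined cost would increase in that direction. The cost function and
# weights here are made up for illustration.

def combined_cost(x, y, w_cost=1.0, w_windy=0.5, w_hilly=0.5):
    """Toy stand-in for road cost as a function of a 2D design parameter."""
    return w_cost * (x**2 + y**2) + w_windy * abs(x - y) + w_hilly * abs(y)

def opposing_force(x, y, dx, dy):
    """Force pushing back against a proposed move (dx, dy).
    Positive when the move would increase the cost; zero otherwise."""
    increase = combined_cost(x + dx, y + dy) - combined_cost(x, y)
    return max(0.0, increase)

# Moving downhill in cost meets no resistance; moving uphill does.
print(opposing_force(1.0, 1.0, -0.1, -0.1))   # 0.0 (downhill)
print(opposing_force(1.0, 1.0, 0.1, 0.1) > 0) # True (uphill)
```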
	"Human evolution is at an end". This occurred when we invented tools. 
		* When move to somewhere colder, you don't wait for your genes 
		  to evolve.
		* We adapt the environment to suit ourselves
		* If you are stupid you don't die (you're put in high office)
		* Extended our physical reach
		* World is now more complex than our monkey brains can handle
		* Putting together facts that we do know, we created a machine
		  to extend the mental domain, our mental 'reach'.
		* Ability to model things, try them out, simulate, create them.
	Machines will not be alive but a 'skin', a new self-organizing 
	environment or eco-system in which we live:
		Was: nature->us
		Will be: nature->machines->us
	[Some people think this new eco-system will appear to be much like raw 
	nature where humankind spent most of the last multiple millions of 
	years; living in a tamed (ala nanotechnology, ubiquitous computers) but
	natural environment.]
	[Course notes are available on CDROM, and some specific course notes
	in hard copy form were purchased by Neli and me].
COURSE #5: Design and Application of OO 3D Graphics and Visualization Systems
	This course is about the implementation of various existing 3D graphics
	systems and their advantages and disadvantages.
	[I did not see this, unfortunately, but picked up and read the course
	notes the next day, as this is the stuff I'm _really_ interested in
	and actually relates to what I do].
Object-Oriented Visualization - Bill Lorensen, General Electric
	Examples of Systems
		* AVS
		* Iris Explorer
		* Data Explorer
	Examples of Toolkits
		* The Visualization Toolkit (VTK) (Available at your local
		* Inventor
		* AVS Express
		* LYMB (GE's toolkit)
Design Issues - Ken Martin, General Electric
		* Data Management (Data flow)
		* Graphics Abstractions (Abstract interfaces)
		* Object Design Issues (Class hierarchies)
System Architecture I - Will Schroeder, General Electric
		* Systems
			- Inventor
			- VTK
		* Models
			- Object model
			- Execution model
			- Data model
		* Strengths and weaknesses
			- Inventor
					Performs well with OpenGL
					Flexible, powerful suite of objects
					Database file format widely used
					State-traversal violates encapsulation
					OpenGL specific
					No data visualization support
			- VTK
					Renderer independent
					Flexible, powerful objects suite
						(including visualization)
					Interpreter (Tcl)
					Purer OO model
					Abstract models impact performance
					In-memory data problem for large datasets
			Similar object models
			Diff execution methods (scene-graph traversal vs. 
				data-flow execution)
			Inventor richer graphics system
			VTK richer visualization system
			VTK interpreter striking difference
AVS/Express: O-O Perspectives - Jean M. Favre, Swiss Center for 
Scientific Computing
		* A visual programming environment
		* An interpreted language for Rapid Prototyping
		* An Object Manager
		* A GUI builder
		* Development tools to integrate user code
		* Applications
		* Kits
			User Interface
			Data Visualization
			Image Processing
			Graphics Display
			Annotation and Graphing
			AVS5 Compatibility
		+ Connections between parameters of one object and another
		+ Each object manages its own execution
		+ C, C++ API
	[More detailed information follows in the course notes].
	Jean M. Favre: presents some class hierarchies for data visualization
	Thomas D. Citriniti, Rensselaer Polytechnic Institute 
		Presents the object-oriented nature of scene graphs for
		VRML and of the structure of VTK.
	The course finishes with papers on VISAGE, VTK and a visualization
	architecture by Favre.
COURSE #8: Information Visualization
	This course covered many, many types of graphs and visualization
	techniques, especially those relevant to data on the WWW (as opposed
	to visualizing scientific data).
	Information visualization consists of applying the methods of
	scientific visualization to data besides scientific. See also the 
	Information Visualization '95 conference proceedings [see the IEEE Web
	site]. Neli picked up a copy at the conference.
Chairperson: Nahum Gershon, The MITRE Corp.
	Most work consists of the following sequence:
	InfoSphere->filtering->few documents->making sense of the
	few documents->[human makes a decision?]->agents->work accomplished.
Stephen G. Eick, Bell Labs
	His company's position is to extract info from large databases in 
	order to gain competitive advantage. Discussed basic problems with 
	network graphs: when one puts unrelated nodes far apart, the 
	long lines that connect them take up lots of real estate, which implies 
	that their relationship IS very important, etc.
	* Multidimensional scaling layout: move unrelated things apart
	* Node and spring layout: move related things closer together
	A problem with large network graphs is that they grow as n^2 along
	one axis and as 'n' along the other axis.
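	[The two layout strategies above can be sketched in a few lines of
	Python. This is my own toy force-directed layout, not Eick's system;
	the constants and function names are invented]:

```python
import math
import random

def spring_layout(nodes, edges, iters=200, k_attract=0.5, k_repel=0.05):
    """Node-and-spring layout sketch: edges pull their endpoints together,
    every pair of nodes pushes apart. Constants are ad hoc."""
    random.seed(0)  # deterministic starting positions
    pos = {n: [random.random(), random.random()] for n in nodes}
    for _ in range(iters):
        force = {n: [0.0, 0.0] for n in nodes}
        for a in nodes:                   # repulsion between all pairs
            for b in nodes:
                if a == b:
                    continue
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d2 = dx * dx + dy * dy + 1e-9
                force[a][0] += k_repel * dx / d2
                force[a][1] += k_repel * dy / d2
        for a, b in edges:                # spring attraction along edges
            dx = pos[b][0] - pos[a][0]
            dy = pos[b][1] - pos[a][1]
            force[a][0] += k_attract * dx
            force[a][1] += k_attract * dy
            force[b][0] -= k_attract * dx
            force[b][1] -= k_attract * dy
        for n in nodes:                   # small integration step
            pos[n][0] += 0.01 * force[n][0]
            pos[n][1] += 0.01 * force[n][1]
    return pos

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Related (connected) nodes drift toward their spring length; unrelated
# nodes are pushed away from everything.
layout = spring_layout(["a", "b", "c"], [("a", "b")])
```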
Stewart Card - Xerox PARC
	Displayed a fisheye view for documents: a 2D array of pages of a 
	document where a whole page was visible in front and partial, angled 
	views of other pages were at the sides, tops and corners.
	Webbook: packaged URL nodes into a book. One can quickly 'ruffle' 
	through pages in this book. The turning of pages is animated and looks
	realistic. Searches on this book return another book with the results. 
	The rest of the screen had a perspective view of a bookshelf on the 
	left and some other books were positioned off in the distance.
	Showed a calendar interface that always showed the day's schedule, the 
	week, the month, the year, and the decade simultaneously in one window. 
	It animates the change of focus from one 'view' to another. The current 
	view is always largest.
	Talked a lot about rating interfaces using a [heuristically-arrived-at?]
	COKCF (Cost of knowledge characteristic function).
	Talked about how the eye works: there is a small area in the center of
	the retina, called the 'fovea', that is for very high resolution
	vision, with 200 times more information for the same area. This
	implies that humans look at one relatively small spot at a time, the
	rest being peripheral, and that in turn implies that user interfaces
	should be designed to take this into account.
	i.e. "Focus plus Context" - fisheye views, desktops, ...
	Exploiting the Dual Perception System to manage:
		* Attention
		* Getting more info on display
		* Where to explore next
		* Building the whole picture out of pieces
	The Human Dual Perception System is:
					(Focused, concentrated)
	---------------------		-------------------------
	Peripheral			Foveal
	Parallel			Serial
	High capacity			Low capacity
	Can't inhibit			Can inhibit
	Fast				Slow
	Load independent		Load dependent
	Unaware of processing		Aware of processing
	Target pops out			Target does not pop out
	--------------------		-------------------------
	Spatial position		Text
	Color				Numbers
	There are features that humans will always process first, then second, 
	... Example: [in the sample picture] we will ALWAYS look at the large
	orange cooler, then the orange drink, then the very large face, in that 
	order. The preattentive features include:
	Number of items		Terminators	Direction of motion
	Line orientation	Intersection	Binocular luster
	Length			Closure		Stereoscopic depth
	Width			Color		3D depth cues
	Size			Intensity	Lighting direction
	Curvature		Flicker
	A chart is also included which shows how well each of these attributes
	is suited to displaying quantitative, ordinal, and nominal quantities
	(i.e. how well color represents a quantity like the fuel efficiency of
	a car).
	The graphical query that can be handled in a single, minimum unit
	of time is: 2 plane variables, 1 retinal variable (X, Y, color).
	4 or fewer items can be counted almost instantly.
		Visual image store: 	200 [70 - 1000] msec
					17 [7 - 17] letters
		Auditory image store:	1500 [900 - 3500] msec
					5 [4.4 - 6.2] letters [notes?]
		Cognition:		70 [25 - 170] msec
		Motor:			70 [30 - 100] msec
		Perceive(see/hear):	100 [50 - 200] msec
	This implies that ~1/10 second is an important time mark: actions that
	take longer than this will be perceived as 'taking time' by the user
	(and will upset their 'stride', or 'stream of consciousness').
	Animation transition times (between scenes) should be about 1 second
	(the "uh-huh protocol" frequency - the time that, in a conversation,
	if one speaker pauses the other is supposed to say 'uh-huh' in order
	to 'Ack'knowledge that they have heard and understood).
COURSE #10: Procedural Modeling and Animation Techniques
	This subject usually covers modeling fire, clouds, water, marbled
	surfaces and animating figures.
	I do not know who gave the first presentation, and the subject
	matter covered does not appear in the course notes. But it was a very 
	interesting talk and presentation.
	A Mac program was displayed which had a simple rendering of a girl's 
	face, a number of sliders (9) to the left and a number of columns 
	of buttons below. The sliders on the left controlled the various 
	emotions that the face can display. These comprise the dimensionality 
	of the expression of the face and are:
	* Brow
	* Blink
	* Eyes in (i.e. left/right)
	* Eyes up
	* Lower lids (to portray suspicion)
	* Sneer
	* Smile
	* Ooh
	* Aah
	The above expressions often move more than one muscle at a time. For
	example the smile moves both the jaw and lips.
	The buttons below the animated face were hard to see but some are:
	Emotion		Lips
	* angry		* relaxed
	* happy		* puckered
	The interface allowed zooming in and out on everything and when zooming 
	in on a button the button (which is to the left of the button label) 
	turns into a little graph that can be directly manipulated with the 
	mouse. For example, he zoomed in on the 'pucker' button, changed the 
	bell shaped curve to be steep and the girl did a quick kiss; made it 
	wider and her kiss lasted for a few seconds. Set it to be (almost) 
	random and she appeared to be talking (actually used a noise function 
	mapped to ~8 variables of the face using 'correct' statistics to make 
	it look right) [I got the impression that it is easy to make it look 
	wrong and kinda disquieting].
	This talk (and many, many others) was targeted at the graphics animators 
	in the audience, indicating that they were likely to be using this kind 
	of software soon. Whereas keyframe animation is:
		still image -> interpolated images -> still image
	with the frames in-between automatically generated by the software
	procedural animation is:
		image -> layered procedures -> image
	with what is happening between specified procedurally (for example:
	'walk', 'smile' and 'wave goodbye to the baby for 3 seconds').
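	[The contrast above can be sketched in a few lines of Python. Both
	functions and all names are mine, invented for illustration; no such
	code was shown at the talk]:

```python
import math

# Keyframe style: in-between values are interpolated from stills.
def keyframe(t, key_times, key_values):
    """Linear interpolation between keyframes (a stand-in for the
    in-between frames the animation software generates automatically)."""
    for i in range(len(key_times) - 1):
        t0, t1 = key_times[i], key_times[i + 1]
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return key_values[i] * (1 - u) + key_values[i + 1] * u
    return key_values[-1]  # past the last keyframe: hold the final value

# Procedural style: what happens over time is specified as a procedure.
def wave_goodbye(t, seconds=3.0):
    """A layered procedure: arm angle oscillates for `seconds`, then rests."""
    return math.sin(t * 8.0) * 30.0 if t < seconds else 0.0

print(keyframe(0.5, [0.0, 1.0], [0.0, 10.0]))  # 5.0
```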
Procedural Models of Geometric Detail
John C. Hart, Washington State University
	Procedural geometric instancing:
	* Uses 'hooks' in the scene specification
	* Similar to an L-System but storage problem alleviated by on-demand 
	  evaluation performed at time of instantiation
	* Similar to shading languages (RenderMan) (shade trees, Cook [1984]).
	Talked about reusing massive numbers of instances in order to save 
	calculation and storage time (O(log n) instead of O(n)). For example 
	he showed a whole lawn made from one blade of grass. Used cyclic, 
	recursive instancing (i.e. 4 blades form a group, 4 groups form a 
	bigGroup, 4 bigGroups ...)
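	[The cyclic, recursive instancing idea can be sketched as follows;
	the class names are mine, not Hart's, and this only counts instances
	rather than rendering them]:

```python
# One blade definition, reused by groups of groups: memory is O(depth)
# = O(log n) while the instance count n grows as 4^depth.

class Blade:
    def count(self):
        return 1

class Group:
    """Four instances of the same child (in a real system, each placed
    with a different transform). Only one child object is stored."""
    def __init__(self, child):
        self.child = child
    def count(self):
        return 4 * self.child.count()

# 4 blades -> group, 4 groups -> bigGroup, ...; depth 8 gives 4^8 blades
lawn = Blade()
for _ in range(8):
    lawn = Group(lawn)
print(lawn.count())  # 65536 blades from 9 stored objects
```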
	Future: The instances:
	* Need to know where they are in world coordinates when evaluating 
	  their procedure
	* Need to know how big they are in device coordinates in order to 
	  support LOD (level of detail) restrictions
	* Need to be able to calculate their bounds without having to execute 
	  all the procedures (i.e. be redrawn).
	* Need to be able to calculate their bounds when moving (animating) 
	  or when have behaviors.
Procedural Models of Natural Phenomena
Ken Musgrave, George Washington University
	Talked about "How to Build a Planet". Topics to cover:
		* Terrain
		* Atmosphere
		* Texture
		* Rendering with LOD
		* Practicality (want to cruise the planet at game-playing speed)
	Talked a lot about how procedural models which do not model reality
	but look good are "ontogenetic" models and 'OKAY', and about how 
	modeling consists of mapping simple concepts to complex domains and 
	about how he worked for several years with Mandelbrot. But the talk 
	and paper do present a VERY detailed analysis of how to model just 
	about every detail of nature.
	Used studies by Descartes on rainbows (how there is darkness between
	the bands, the angle of the arc, ...). About how, for the human eye,
	at great distances there is loss of contrast and reds become blue-ish
	and greens red-ish [I think that is how it works. It has to do with
	Rayleigh scattering]. Send him email to get this presentation in 
	PostScript. He is co-author of the popular
	"Texturing and Modeling" book by Ebert et al. He has joined a computer
	game company to render planets in real time. Says Kajiya (Microsoft)
	told him this would be possible on a PC in about 18 months.
Procedural Modeling with Artificial Evolution
Karl Sims, MIT
	What he does is create:
	* Procedures for navigating procedures
	* Evolve 2D pictures
		* Evolve/mutate lisp equations (e.g. "mod x(exp((abs y)*46))")
		* Process input images
		* Process 3D Objects
		* Genetic morphs of 1 equation to another
		* Video of genetic morphs of one object to another was presented
		  in the popular video: "Primordial Dance"
	* Evolving Creatures
		* Uses recursive limit procedures (cyclic instance calls with
		  an upper limit on the number of recursions)
		* These procedures can be displayed pictorially using techniques
		  presented in the '94 SIGGRAPH paper (pp. 15-22)
		* Also:
			* Tweak node parms probabilistically
			* Make new nodes at random
			* Tweak connection parms probabilistically
			* Add or remove connections probabilistically
			* Garbage collect (remove) unused nodes (that are not
			  being used by the creature)
		* Creatures are evolved (they get to live if they do well) to
		  perform some task like walking or swimming
		* Humorously the creatures find bugs in the system and take
		  advantage of them.
			* Found a way to push on themselves to move (i.e.
			  getting free energy)
			* Used potential energy at start to 'fall down' and
			  somersault to success.
		* Creature's genes were composed of primitive elements: +, -,
		  sin, cos, blur, warp, noise, ... [All of this is in the '94
		  paper].
		* Creatures are a graph of nodes (software controllers) and 
		  blocks (3D body parts).
		* Neurons: composed of oscillators, integrators, differentiators,
		  wave generators, ...
	Question: Did you limit the complexity somehow? (i.e. even the
	Connection Machine, the massively parallel computer this was done on,
	would eventually grind to a halt).
	Answer: Yes. When evolving the 2D images, used methods to predict
	the approximate amount of compute time (i.e. used the length and
	complexity of the lisp equations). And when evolving creatures
	the number of blocks was limited.
	[See also Course #36, which contains reprints of papers on 2D
	picture evolution (SIGGRAPH '91) and on evolving 3D creatures
	(Artificial Life Proceedings '94)]
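	[A hypothetical miniature of the 2D picture evolution idea, in
	Python: images are expression trees over (x, y), and mutation
	replaces a leaf with a new small subtree. The operator set and all
	names here are mine, a tiny sample rather than Sims's actual set]:

```python
import math
import random

OPS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
    "sin": lambda a: math.sin(a),
    "abs": lambda a: abs(a),
}

def evaluate(expr, x, y):
    """Evaluate an expression tree at pixel coordinates (x, y)."""
    if expr == "x":
        return x
    if expr == "y":
        return y
    if isinstance(expr, (int, float)):
        return float(expr)
    op, *args = expr
    return OPS[op](*(evaluate(a, x, y) for a in args))

def mutate(expr, rng):
    """Randomly descend the tree and wrap one leaf in a new subtree."""
    if not isinstance(expr, tuple):
        return ("sin", ("mul", expr, rng.uniform(0.5, 4.0)))
    op, *args = expr
    i = rng.randrange(len(args))
    args[i] = mutate(args[i], rng)
    return (op, *args)

rng = random.Random(1)
parent = ("abs", ("add", "x", "y"))
child = mutate(parent, rng)   # a slightly different picture
print(evaluate(parent, 0.5, 0.25))  # 0.75
```

	In the real system a population of such trees is rendered, the user
	(or a fitness function) picks survivors, and the cycle repeats.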
COURSE #31: Practical 3D User Interface Design
Practical 3D User Interface Design
Dan Robbins
	The lecture should have been titled: Process of the User Interface 
	Design for Microsoft 3D Movie Maker.
	I did not see this paper but read the notes (slides). It is very
	CHI-like and interesting from a user-interface designers perspective.
	Also brings up problems unique to 3D.
	All of the notes for this course are very interesting from a CHI
	perspective, especially for those who think the 3D wave is just
	beginning [like me].
COURSE #36: Artificial Life for Graphics, Multimedia and Virtual Reality
Artificial Plants
Przemyslaw Prusinkiewicz, University of Calgary
	[This guy is "Mr. L-System"]. An addition to the L-system 'language'
	has been made to add an evaluation function, ?, that can be used to 
	embed decisions in the language (e.g. "?P(x, y)", make a decision as 
	a function of current position). This new evaluation function technique
	has now been used to:
		* Add automatic budding and pruning capability (i.e. to confine 
		  plants to cubes or other, more fanciful, 3D volumes, like 
		  seashells and helices)
		* Add the effects of insects
		* Make plants flower from the top down (during animations)
		  which is what they do in nature
	Their system architecture is of form:
		Plant				Environment
	======================		=========================
		Internal Process		Internal Process
		Response ---------------------->Reception
	The L-System function "?E(...)" is used to communicate with the
	environment. For example: How much light does this leaf get?
		* Lots	=>	Add new branches and new leaves
		* Some	=>	Add new leaf
		* Little=>	Drop leaf
		* None	=>	Drop branch
		* Model whole ecosystems
		* Plant evolution
	See for free code that 
	lets a person play with some of this.
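	[A hypothetical sketch of the environmentally-sensitive L-system
	idea, in Python: a rewriting rule queries the environment (here, a
	made-up light level at the symbol's depth) and branches on the
	answer, ?E-style. Symbols, thresholds, and rules are all mine]:

```python
def light_at(depth):
    """Toy environment: deeper (more shaded) parts get less light."""
    return max(0.0, 1.0 - 0.3 * depth)

def rewrite(symbols):
    """One rewriting step over a list of (symbol, depth) modules."""
    out = []
    for sym, depth in symbols:
        if sym != "leaf":
            out.append((sym, depth))        # branches persist unchanged
            continue
        light = light_at(depth)             # the '?E(...)'-style query
        if light > 0.6:                     # lots: grow a branch + leaves
            out.append(("branch", depth))
            out.append(("leaf", depth + 1))
            out.append(("leaf", depth + 1))
        elif light > 0.3:                   # some: keep the leaf
            out.append((sym, depth))
        # else: little light -> drop the leaf entirely

    return out

plant = [("leaf", 0)]
for _ in range(3):
    plant = rewrite(plant)
print(plant)
```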
	Questions: [He is very knowledgeable about his field]
	Q. How is the problem of branches running into branches handled?
	A. 	* It has a low probability so not much of a problem
		* It will run out of light and go away
		* It is a curse in flower petals (some pictures of roses
		  were manually 'fixed').
		* Ignored for now
		* Eventually the environment will let us know there is an
		  intersection
	Q. Can this be used to model the propagation of memes? The growth of
	   derivatives in financial markets? ...
	A. L-Systems are mostly used for geometrical domains; for example, they 
	   have been used to model sea-shells and molecules.
Artificial Evolution for Graphics and Animation
Karl Sims
	[Similar to a previous course. I hit the exhibition]
Behavioral Animation and Evolution of Behavior
Craig Reynolds
	[Did not get back in time to see this. He talked about his 'Flocks,
	Herds and Schools' work].
Artificial Animals
Demetri Terzopoulos, University of Toronto
	This was about modeling fish: a very exhaustive research model, 
	simulation and animation of fish. The architecture looks like:
	Brain->send message "swim left"->motor controllers->muscle contractions
	* Habits (are inherited)
	* Current Mental state:
		* Hunger
		* Libido
		* Fear
	Using these tools an animator will not be a puppeteer but a photographer
	(e.g. the system does the work of making them move). A very, very 
	complete model. For example:
		* It models the fovea of the fish's eyes (i.e. the fish will 
		  move their eyes (and therefore body and swimming direction) to
		  put prey directly in front).
		* Fish learn how to swim
		* Fish eat food and other fish
		* Fish swim to/from things
		* Fish can learn (i.e. hit a beach ball with fin by jumping
		  out of water)
		* Fish learn to swim by learning the periodic oscillating motion 
		  that real fish use.
Artificial Humans in Virtual Worlds
Daniel Thalmann, Swiss Federal Institute of Technology, Switzerland
	Modeling Humans: Software is basically:
		For each Actor
			Get state based on scene
		For each Actor
			Execute action based on state
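	[The two-phase loop above (sense everything first, then act) keeps
	every actor deciding from the same snapshot of the scene. A minimal
	Python sketch, with actor behavior invented by me]:

```python
class Actor:
    def __init__(self, x):
        self.x = x
        self.state = None

    def sense(self, scene):
        # Phase 1: read the scene, store a decision, change nothing.
        nearest = min((a for a in scene if a is not self),
                      key=lambda a: abs(a.x - self.x))
        self.state = "left" if nearest.x < self.x else "right"

    def act(self):
        # Phase 2: apply the stored decision (here: step away from nearest).
        self.x += 1 if self.state == "left" else -1

scene = [Actor(0), Actor(1), Actor(5)]
for a in scene:          # For each Actor: get state based on scene
    a.sense(scene)
for a in scene:          # For each Actor: execute action based on state
    a.act()
print([a.x for a in scene])  # [-1, 2, 6]
```

	Interleaving sense and act in a single loop would instead let early
	actors' moves change what later actors perceive.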
	Uses inverse kinematics that takes into account the center of mass. Can 
	make autonomous virtual humans:
		* Follow a cube through a maze.
		* Play tennis with a real human
		* Play tennis with another autonomous virtual human
		* Walk around in park talking to each-other
		* Referee a tennis game
		* Can move around in (video of) real room, casting shadows and
		  interacting with 3D obstacles
	By being autonomous it will be easy for the "average programmer" to
Interactive Autonomous Agents
Pattie Maes, MIT
	Pattie Maes was unable to attend; instead a co-worker, Bruce M.
	Blumberg, who works with her and wrote the software, gave the talk.
	Both worked on the ALIVE system.
	Using behavior for models makes sense, models:
		* Know what they are doing
		* Know what they are feeling
		* etc.
	In contrast, Toy Story was modeled by artists rendering each frame
	using 3D models. 'Woody' had:
		* 700 degrees of freedom
		* His face had 200 degrees of freedom
		* His Mouth had 50 degrees of freedom
	(i.e. the software that they wrote for Woody had 700 parameters to mess 
	around with). Very laborious. 
	Solution: want things with behavior
		* For animations
		* For VR
		* For science/knowledge
	They wrote and use the Hamsterdam toolkit. One of the key insights
	was that entities should be persistent when trying to achieve a goal,
	but only for the right amount of time, then they should give up and
	try something else. [Presented a paper at last years SIGGRAPH:
	"Arbitration between motivations"]. Also added a boredom factor to help
	in this matter. Used alot of the work done in Ethology - the behavior 
	of animals [though, all of this seemed relevant to humans as well]. 
	Talked a lot about implementation; for example, to determine whether 
	two behaviors could occur at the same time, a table was created:
	Motor Skills	|	Degrees of Freedom
	Walk		|o|o|o|o| | | | | | | | | | | | |
	Sit		|o|o|o|o| |o|o|o| | | | | | | | |
	Look at		| | | | | | | | | | |o|o|o| | | |
	Wag tail	| | | | | | | | | | | | | | | |o|
	This table is for: Silas T. Dog, one of their virtual test subjects.
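	The degrees-of-freedom table above can be sketched as bitmasks, one
	bit per degree of freedom: two motor skills can run concurrently only
	when their masks are disjoint. The bit assignments below are my
	illustrative encoding, not Hamsterdam's actual data:

```python
# Hypothetical degrees-of-freedom masks, one bit per DOF (16 bits, as in
# the table above); the exact bit assignments are illustrative.
SKILLS = {
    "walk":     0b0000000000001111,  # legs
    "sit":      0b0000000011101111,  # legs + hips (overlaps "walk")
    "look_at":  0b0001110000000000,  # head/neck
    "wag_tail": 0b1000000000000000,  # tail
}

def compatible(a, b):
    # Two motor skills may run at the same time only if they claim
    # disjoint degrees of freedom.
    return (SKILLS[a] & SKILLS[b]) == 0
```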
		* Smart avatars avoid walls
		* Animals that learn from other animals
Modeling and Rendering Architecture from Photographs
	This paper presented a technique by which photographs of a building are 
	used to 1) generate a 3D model of the building and 2) texture map it. 
	I.e., from a number of photographs of a building (12 is a goodly number), 
	one can use this technique to generate a computer model that allows one 
	to 'fly' around the building and view it from any angle. Useful for 
	historical buildings which no longer exist, or to create a 3D building
	from Monet paintings.
View Morphing
	This paper presented a technique by which one can morph between 3D 
	shapes. The example was of two Mona Lisa paintings, one of which was 
	the reflection of the original. The morphing process showed Mona 
	moving her head from left to right, which looked better than the 
	similar 2D morphing process.
Light Field Rendering - Marc Levoy, Pat Hanrahan, Stanford
	[This was the important paper of the conference, at least with respect
	to changing a fundamental graphics paradigm].
	This paper presented an approach by which one thinks of 'images' not as 
	light bouncing off of 3D modeled objects but as a 5D function of 
	light. I.e., at any point in space the light coming in to that point 
	completely describes what can be seen, and this light can be described
	as a color along a ray at a position (x, y, z) and direction (theta, 
	phi). The paper describes a technique by which images are represented 
	as 2D slices of a (simplified) 4D function. Using this method, inward 
	or outward facing video clips can be used to recreate objects and 
	environments that can be smoothly flown through (this is similar to 
	Apple's QuickTime VR but allows for smooth transitions between frames). 
	The look-from point or look-at point must be fixed.
		* Light fields as primitives in graphics libraries.
		* Re-thinking of light in graphics in general
		* Planar arrays of cameras would allow individual viewers
		  of TV to move the viewpoint around without really moving the 
		  camera.
	Demo: runs on Windows 95
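	The 4D simplification can be sketched as the two-plane ('light slab')
	parameterization: a ray is indexed by where it crosses a (u, v) plane
	and an (s, t) plane, so radiance becomes a 4D table lookup. A minimal
	sketch, with an illustrative resolution and nearest-neighbor lookup
	standing in for the paper's interpolation:

```python
import numpy as np

# A tiny light slab: radiance stored per ray (u, v, s, t). Real light
# fields use large, compressed tables; N = 8 is purely illustrative.
N = 8
L = np.zeros((N, N, N, N, 3))  # RGB radiance per (u, v, s, t)

def radiance(u, v, s, t):
    # Rendering a view reduces to looking up rays in the 4D table; the
    # paper interpolates between samples, here we take nearest neighbor.
    return L[int(round(u)), int(round(v)), int(round(s)), int(round(t))]
```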
The Lumigraph - Steven J. Gortler et al., Microsoft Research
	This was very similar to the last paper. Even the reduction to a 4D 
	function and parameterization of this function was similar.  The 
	difference seems to be that this system recreates the objects as a 
	texture map on a volume reconstruction and then generates the image 
	from these objects in a traditional manner whereas the Levoy paper goes 
	right from light field to image. This paper also presented a number of 
	techniques for interpolating the images between different views.
	There were a number of papers on reducing the polygon count of various 
	3D models. They all try to preserve image quality to within a specified 
	range. One paper reduced a human face to ~3 planes and then 
	texture-mapped that.
	There were quite a few papers on articulated figure motion and how to 
	optimize the calculations. The one from Microsoft talked about 
	semi-automatic transitions between movements of the figure (i.e. the
	transition between standing up and walking) and about a 
	motion-expression DAG (directed acyclic graph) language. One of the 
	more interesting papers [to me] follows:
Limit Cycle Control and its application to the Animation of Balancing
and Walking - Joseph Laszlo et al., University of Toronto
	This paper presented a technique by which one can represent the 
	periodic motion present in various movements as a loop and then apply 
	perturbations to this for effects or to maintain the steady-state 
	nature of the loop in the event of disturbances from the environment. 
	The authors are trying for a general solution using these cyclic 
	motions through the state space of some object. A biped has two 
	control variables and two limit cycles. Future: using motion capture 
	data to generate the limit cycles and/or to perturb the data.
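	The limit-cycle idea can be sketched with a simple polar-coordinate
	oscillator: the phase advances around a loop while the radius is
	attracted back to the steady-state cycle after any perturbation. The
	gains and time step below are illustrative, not the paper's controller:

```python
import math

# A limit cycle in polar coordinates: theta advances around the loop
# while r is pulled back toward the cycle at r = 1 after a disturbance.
def step(r, theta, dt=0.01, gain=2.0, omega=2 * math.pi):
    r += gain * (1.0 - r) * r * dt  # attract radius back to the cycle
    theta += omega * dt             # advance phase around the loop
    return r, theta

def recover(r0, steps=2000):
    # Perturb the state to radius r0 and let it relax back to the cycle.
    r, theta = r0, 0.0
    for _ in range(steps):
        r, theta = step(r, theta)
    return r
```

	A disturbance from the environment changes r, and the dynamics pull
	the state back onto the loop; effects can be produced by deliberately
	perturbing r or theta.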
SKETCH: An Interface for Sketching 3D Scenes - Robert C. Zeleznik et al.,
Brown University
	This paper presented a 2D gestural language which specifies 
	various 3D shapes, constraints and layouts. [Neli liked this
	paper a lot. I missed it.]
Disney's Aladdin: First Steps Toward Storytelling in Virtual Reality
Randy Pausch et al., University of Virginia
	This paper presented user data about a VR system installed at the 
	Anaheim Disneyland. 45,000 flyers (of the magic carpet) were
	questioned. The questionnaire revealed:
	* People rarely turn their head, looking straight ahead (like TV). 
	  They can be taught however, to look from side-to-side, and like
	  it when they do.
	* Motion sickness did not seem to be a problem.
	* Men and women liked the game about the same amount.
	* The first experience cuts IQ in half (the experience is overwhelming;
	  this could be a side-effect of the newness of the technology 
	  and wear off as people get used to it).
	* About 1,000,000 people watched on monitors but it had to be 
	  experienced for people to 'get it'.
	* Animated actors need to react to the user in order to appear 'real'.
	* Used hand-painted textures for the scenes (so some special techniques 
	  had to be used in order to allow the 4? reality engines to load all 
	  of those texture maps when the user entered a new scene).
	* The artists who painted the textures made them kind of dark and 
	  forbidding but when the computer graphics people added more light
	  sources the artists said "if I wanted a light there I would have drawn
	  a light there".
IMPROV: A System for Scripting Interactive Actors in Virtual Worlds
Ken Perlin et al., Media Research Laboratory, New York University
	[See also course notes].
	This paper presented the architecture and an animation language to 
	assign behavior and actions to articulated figures.
The Virtual Cinematographer: A Paradigm for Automatic Real-Time
Camera Control and Directing - Li-wei He,  Microsoft Research
	This paper presented a technique by which camera shots can be 
	automatically placed and controlled purely by software. Film is a 
	collection of scenes which are a collection of shots. This system
	encodes a number of rules-of-thumb for these camera shots and 
	sequences of camera shots and moving camera shots. Each user has their own 
	Virtual Cinematographer to which they can give hints about what they 
	want to see. Future: automatically seek out interesting backgrounds 
	(behind the actors) for the shots.
Comic Chat - David Kurlander,  Microsoft Research
	This paper presented a technique by which comic book-like panels are 
	generated automatically for online communications. The user may choose 
	one of a number of avatars (characters) to use (if they are using Comic 
	Chat) or is assigned one at random. The avatars are positioned 
	correctly in each panel by the system. The user may choose one of a 
	number of emotion types and strengths by using an 'emotion wheel'. 
	When the user types emoticons (:-)), 'I', 'you', 'IMHO', etc., the 
	system automatically causes the avatar to smile, point to itself,
	point to the other, and so on, respectively. The title page 
	has a list of main characters. The comic book moves to the next panel 
	when the panel is filled with characters and balloons. Jim Woodring was 
	the consultant cartoonist and his style is used for the 
	backgrounds and characters. The presenter talked about modifying the 
	background to, say, a map of the state of Ohio when you say you are 
	from Cleveland, but was concerned about bandwidth issues. You can direct
	your words to one or more participants. The layout is done locally 
	(whispers can be private, i18n issues are handled locally, ...). 
	Narration and thought balloons are supported. Textual codes are 
	prepended to the beginning of the usual IRC text protocol. Future: more 
	comic styles, more comic elements, apply it to interactive fiction, ... 
	Demo at:
Talisman: Commodity Realtime 3D Graphics for the PC
Jay Torborg, James T. Kajiya - Microsoft
	This paper presented a design for a graphics card that is geared 
	towards quickly rendering 3D game-like graphics. It does this by: 
	1) Relying on spatial and temporal coherence (i.e. most of the time
	   the scenes don't change much from one frame to the next and so don't 
	   have to be completely re-rendered).
	2) While the video buffer (composite layers) is being scanned out at 
	   video rates (~75 Hz), affine transformations are applied to the 
	   objects in real-time as they are read, to compensate for any 
	   changes since the last frame.
	3) Textures and objects are stored in the video buffers in compressed 
	   form in order to reduce the bandwidth requirements of the rendering 
	   pipeline (the data can be compressed and decompressed in real-time).
	"Performance rivaling high-end 3D graphics workstations can be achieved 
	at a cost point of two to three hundred dollars".
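	The layer re-use in point 2 can be sketched as follows: rather than
	re-rendering a layer, small view changes are approximated by an affine
	transform applied to the cached layer. A minimal sketch with
	illustrative values (the real hardware warps compressed image data,
	not corner points):

```python
import numpy as np

def affine(points, a, b, c, d, tx, ty):
    # Apply the 2D affine transform [[a, b, tx], [c, d, ty]] to an
    # (N, 2) array of points.
    return points @ np.array([[a, b], [c, d]]).T + np.array([tx, ty])

# A cached layer's bounding quad, slightly scaled and nudged right to
# approximate the camera motion since the layer was last rendered.
quad = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
warped = affine(quad, 1.05, 0.0, 0.0, 1.05, 0.1, 0.0)
```

	When the approximation error grows too large the layer is re-rendered,
	which is the source of the 'snapping' problem mentioned below.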
	The problem: the SGI Reality Engine 2's graphics pipeline has a bandwidth 
	of ~10,000KB/sec. A PC graphics card has a bandwidth of ~200KB/sec. As 
	the paper states: "SGI has nothing to fear from evolving PC 3D 
	accelerators, which utilize traditional 3D pipelines, for some time to 
	come."
	This architecture requires that objects go on different layers in the 
	video buffer if they need different Z values. It is unclear how to 
	specify this Z value, or what happens when a single object spans Z values. It is up 
	to the application to keep track of what is on what layer (MS says it 
	takes only about 50% of the CPU power of a Pentium 90 to depth sort the 
	3D database required) and that this will be only about 10% of the CPU 
	power of the host machine by the time the board ships. DirectDraw is 
	used to interface to this card. They are working on a 3D graphics 
	database API [Direct3D's Retained Mode? What about Template Graphics's 
	OpenGL Inventor they are shipping with Visual C++ 4.2?]. Admitted to 
	some visual 'snapping' artifacts when objects are re-rendered 
	after a number of frames of affine transformations. They have been 
	working on this with hardware video card manufacturers for about a year. 
	The programmer specifies which buffer an object goes into in DirectX by
	using 'begin scene' and 'end scene' methods.
	[It is my belief that if a program conforms to the requirements of this 
	card that it will indeed get high-performance graphics. But at what cost 
	in flexibility of content and kludginess of code?].
Visual Models of Plants Interacting with Their Environment
Radomir Mech and Przemyslaw Prusinkiewicz, University of Calgary
	[See course notes].
Flow and Changes in Appearance
Julie Dorsey et al., MIT
	This paper presented a method by which objects can be 'aged' by 
	simulating the flow of particles over objects as if they had been rained 
	upon for many years. An extensive analysis is made of the effects that 
	the flow of water causes in various kinds of materials. Examples 
	were presented of the algorithm's effect on a brick wall and on some 
Superior Augmented Reality Registration by Integrating Landmark Tracking and 
Magnetic Tracking
Andrei State et al., University of North Carolina at Chapel Hill
	[I did not see this presentation or the related Technical Sketch, but I
	bet it was cool].
Image-Guided Streamline Placement
Greg Turk et al., University of North Carolina at Chapel Hill
	This paper presented techniques for the drawing and placement of 
	streamlines (used for the visualization of 2D vector fields). It starts 
	with tufts (or streamlets), which are short arrows in the direction of 
	the field located on a rectangular grid. It then optimizes by perturbing 
	the position of the tufts to improve a score, where the score measures 
	the evenness of the gray level of all streamlines.
	Problems with tufts:
		* No fine detail
		* Endpoints and arrows interrupt flow.
	Add optimization pass number 2, which adds operations (move, add/delete, 
	shorten/lengthen, combine) and uses random descent to further optimize
	the layout. The magnitude of vectors can be displayed using spacing, 
	width, intensity, color, or opacity. A multiresolution streamline was 
	presented which had very thick lines for gross detail and fine lines 
	for fine details (one must zoom in to see these well).
		* Use for name placement in maps.
		* Animation of vector fields.
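	The random-descent pass can be sketched as: perturb one tuft at a time
	and keep the move only if the score does not get worse. Here the
	variance of point counts over coarse grid cells stands in for the
	paper's even-gray-level measure (an assumption for illustration):

```python
import random

# Random descent over tuft positions in the unit square: a move is kept
# only when the "evenness" score does not decrease.

def score(points, cells=4):
    # Higher (less negative) is more even: negative variance of the
    # number of points falling in each coarse grid cell.
    counts = [[0] * cells for _ in range(cells)]
    for x, y in points:
        counts[min(int(y * cells), cells - 1)][min(int(x * cells), cells - 1)] += 1
    flat = [c for row in counts for c in row]
    mean = sum(flat) / len(flat)
    return -sum((c - mean) ** 2 for c in flat)

def optimize(points, iterations=2000, jitter=0.05, seed=1):
    rng = random.Random(seed)
    best = score(points)
    for _ in range(iterations):
        i = rng.randrange(len(points))
        old = points[i]
        points[i] = (min(max(old[0] + rng.uniform(-jitter, jitter), 0.0), 0.999),
                     min(max(old[1] + rng.uniform(-jitter, jitter), 0.0), 0.999))
        new = score(points)
        if new >= best:
            best = new       # keep the move
        else:
            points[i] = old  # reject the move
    return points
```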
Scale-Dependent Reproduction of Pen-and-Ink Illustrations
Mike Salisbury, David H. Salesin, et al., University of Washington
	This paper describes a technique by which high-fidelity pen-and-ink 
	illustrations can be rendered at any scale or resolution. The process 
	creates half-tone images using pen-and-ink strokes. Discontinuities
	in the original image are saved and used during scaling to preserve image 
Rendering Parametric Surfaces in Pen and Ink
Georges Winkenbach, David H. Salesin, University of Washington
	This paper describes a technique to do what the paper title indicates.
	This and the previous paper are refinements of techniques presented
	over the last few years.
Painterly Rendering for Animation
Barbara J. Meier, Walt Disney Feature Animation
	This paper describes a technique that makes use of previous techniques
	that render images using paint-brush-like strokes and adds the ability
	to animate the objects (pan and zoom around them) without the strokes
	moving around ("getting the paint to stick to surfaces rather than
	randomly change with each frame").
	[The last paper, we did not get to see it, but got to read it (and many
	others) while waiting for the various planes we used to get home].
	Many of the panels were just position speeches by people about 
	technology that is already well known to me (at least to the depth that 
	they could present it in a 20-minute speech). So many panels 
	were a bust, and I had to leave and find something else to learn. 7 were 
	like this. 9 I did not get to see. The following panels were the ones I 
VRML: Prelude and Future
	Don Brutzman - moderator.
	Gavin Bell - previously of SGI: VRML to follow same evolution as TV. 
	Expect a few years of flying logos. Also said there is a Cosmo player 
	that runs on a laptop?
	Salim AbiEzzi - Microsoft: talked about ActiveVRML, which is now called
	ActiveAnimation []. Built-in support for
	audio, sprites. [This language is based on ML with extensions for time
	as an implicit type].
	Andy van Dam: The VAG VRML process worked because:
		* Limited objectives
		* Great effort
		* Used already existing Inventor metafile format.
		* Media integration (2D, audio, ...)
		* Networking, distributed capabilities
		* Database, world management, database updates
		* Behavioral modeling (which Andy thinks is more than just
		  sending messages which is what Don suggests).
	VRML is a "modern display-list system". We need tools; we're not going
	to write in VRML. Need clip behaviors, clip models, clip art, ...
	Mark Pesce: read a story about a recent experience of his of getting 
	lost in the woods and having to sleep in the wild overnight. His 
	conclusion: those that think they know everything can't, and
	so they should "sit down and shut up". [How this was related to the
	story is not clear to me].
	Discussion: Mixing of ActiveAnimation and VRML. VRML explorer for 
	Internet Explorer is written in Java. Microsoft is doing Java3D too. 
	Andy thinks that sending vertices to a rendering pipeline is going to 
	be replaced/augmented with light-field techniques [so how does this 
	relate to VRML? New VRML primitives to specify the light-field?].
Graphics PCs will put Workstation Graphics in the Smithsonian
Pro: Michael Cox - S3, Jay Torborg - Microsoft
Con: Michael Deering - Sun, Kurt Akeley, VP of R&D, SGI
	This was a fun panel. I missed Michael Cox's position speech.
	Michael Deering: Workstations:
		* Graphics are accurate
		* Reduce time-to-market
		* People are comparing 2 years in the future of the PC to what 
		  WS had 3 years ago.
	Jay Torborg: PCs:
		* Games will drive 3D graphics on PC to excel
		* 50,000,000 people buy a game each year
		* 1.3 million AutoCAD licenses say CAD can be done on PCs
	Kurt Akeley: WS's
		* PCs are a commodity product
		* WS are a value added product
		* PCs are last with the latest technology
		* Both use same technology but different business models
		* PCs are ubiquitous, WS's lead
	I did not keep track of who said what in the following, but it should
	be obvious whether they were in the affirmative or not.
		* Research is moving to PCs (12 papers vs. 4)
		* PCs are for games, WS for technology
		* Game graphics may so diverge from the ability to do CAD
		  graphics that WS may indeed be required.
		* Q. How much of the market will PCs have next year? A. The
		  same amount that Java workstations will have the year after 
		  that.
		* People buy based on price-performance (price includes cost
		  of crashes, training, throughput, reliability, ...).
		* Q. What platform has 30% more market share than Windows?
		  A. Doom <laughs>
		* Java OS will run on wrist watches. Windows runs on Jay's
		  watch right now. Q. What time is it? <laughs>
		* When will SGI have compiler upgrades at PC prices? <laughs>
		  Kurt: A. Don't know. Michael(@Sun): A. When you buy a Sun 
		* Windows: Plug-and-Pray
		* WS have integrated hardware and software development instead 
		  of separate companies competing against each other.
		* Michael(@Sun): by 2020 graphics hardware will be able to 
		  completely saturate the human visual system. [What does this 
		  mean? But I can't wait to see what kind of graphics power it
		  will let me play around with].
	Questions ran for about an hour (15 minutes after they were supposed to
	be over). Someone did ask how many people in the audience used WSs. 
	About 1/2 the people had already left but of those left about 70% 
	raised their hands.
Springing into the Fifth Decade of Computer Graphics - 
Where We've been and Where We're Going
	[The OldTimer's Club]
	Carl Machover: Graphics is now a $50 billion business.
	Fred Brooks: In the future we will render pixels, not polygons. How 
	long will we need a polygon as an approximating primitive? The 
	PixelFlow machine renders 20 million polygons/sec => 1-10 
	pixels/polygon. If one knows the depth of each pixel in a scene, one can 
	know where everything will be when the viewpoint changes (except for 
	occluded objects, animated objects, ...). In the future we will take much 
	more advantage of Plenoptic rendering: 
		* 20 frames/sec when _viewpoint_ changes
		* < 20 frames/sec when _model_ changes (with some exceptions)
		* Don't have to re-render the whole scene every frame.
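	Brooks' point about per-pixel depth can be sketched as a reprojection:
	under a pinhole camera model, a camera translation shifts each pixel
	inversely proportionally to its depth, so near pixels move more than
	far ones. A minimal sketch (the pinhole model and f = 1 are my
	assumptions, not Brooks' formulation):

```python
def reproject(x, y, depth, dx, dy, f=1.0):
    # Screen position of a pixel after the camera translates by (dx, dy):
    # the shift is f * translation / depth, so near pixels move more.
    return (x - f * dx / depth, y - f * dy / depth)
```

	This is why a viewpoint change can be approximated cheaply from the
	current frame, while a model change still requires re-rendering.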
	Ed Catmull: Historically:
		* Realism was the first goal for graphics. Good or bad, this got
		us to where we are today. They were really unsatisfied with what 
		the graphics looked like in those days. People should still 
		be unhappy with what we can accomplish today - much more work 
		needs to be done.
		* ARPA - government funding drove a lot of graphics
		* Art - There is a difference between the ability to 'see'
		versus the ability to 'draw'. Art schools don't teach people
		so much how to draw as how to see what is present. People
		should be cognizant of what they can 'see' when creating.
	Sylvan "Chase" Chasen: Lockheed had 'directed' research that had to
		accomplish something in a few years.
	Bob Dunn: [produced the CORE standard]: Graphics is 3 things:
		* Display + Presentation
		* Visualization/Objects/Behaviors
		* Access and Management of Data
		This then implies:
		* Platform will subsume more technologies (more functionality
		  will come with the hardware and software platforms)
		* Graphs for management will be very important
			* Animated 3D analysis (info visualization) over very
			  LARGE terabyte datasets.
		* 3D animated gossip, shopping, travel
		* 3D animation for CAD (animate failures and other things 
		  besides a perfectly working system).
	Originally graphics people did:
		1. Charts, then were
		2. CAD operators, now
		3. ?
	GUIs will get much more sophisticated.
	Bert Herzog: Time to Maturity: Visionaries say 2 years, Users say 20-30
	years. This time includes development, test and production. Examples:
		SketchPad: 1961
		CAD done: 1985
		Visualization: 1960 (traffic flow)
		[Visualization done: AVS?]
		Animated films: 1965,66
		Toy Story: 1995
	Andy van Dam: UIs should be:
		* full-duplex (not ping-ponging control back and forth between 
		  user and computer). 
		* Computer should use/read facial expressions
		* Use both hands for input devices
		* Be communal, cooperative
		* Use agents to do things we are not good at
	VRML will be the next graphics standard and will be a BIG thing.
Electronic Theatre:
	The electronic theater was held in a very old, very ornate theater 
	(the Saenger Theater). Acoustics and the sound system were excellent 
	(and loud). Many of the clips were commercials and outtakes of movies. 
	There was only one abstract art clip. The highlights were clips from 
	the movie 'Joe's Apartment' and the Halloween '95 Simpsons episode 
	(Homer gets trapped in the 3D universe and there are many 'in' jokes 
	for us computer types [Homer says: 'Why do I feel like just standing
	here is costing millions of dollars per second?']). I also liked
	the clip describing Fibonacci and the golden mean.
	The theater was within walking distance of the French Quarter, so we
	and many others walked back and invaded the quarter's restaurants.
	The exhibition this year was about 10% book publishers, 60% 3D 
	multimedia production software, 20% hardware vendors, and 10% video
	production, etc. The booths most attended were SGI and SoftImage.
	Microsoft, Intel, and IBM were noticeably customer free. MetaTools
	had a counter where they were selling software and they had a
	quite a bit of business. Sun was noticeably (to us) not promoting
	their own stuff (Java workshop was not present, sales people told
	us to fill out a questionnaire when we asked about upgrade policies).
	[Check their Web-site for upgrade information which was posted July
	Missing: PCs [_I_ didn't see any, just SGIs and one HP] and PC video
	card vendors (3 years ago there were LOTS). It could be that because 
	most graphics artists and programmers use workstations, the PC 
	crowds are writing off SIGGRAPH.
	I got a demonstration of MET++, a multimedia framework built on top of
	ET++. I talked to Philipp Ackermann, the author of the book on MET++,
	[Developing Object-Oriented Multimedia Software, Philipp Ackermann,
	(c) 1996, []] and it appears that 4 
	people got together 3 years ago and have been writing a great deal of 
	software. They have very many demo applications running on this 
	framework (3D modeler, WWW browser, a visual programming language, 
	...). I have yet to dump the code from the book's CD and see what it 
	all looks like but the book, the framework and demo are very, very 
Course Reception:
	We missed the parade of SIGGRAPHers down Canal Street to the wharf. 
	Supposedly the street was to be blocked off for 15 minutes while this 
	was going on. We were distracted on a (one of many) food finding 
	The reception was on board a ship that did not leave the dock and could 
	only hold about half of the SIGGRAPH crowd at one time, so there was 
	a very large crowd waiting to get on. We had just had a nice Italian 
	dinner, so we just hung out on board watching the Mississippi for a
	short while and then left.
Paper and Panels Reception:
	At the Marriott. [There was what appeared to be vegetarian food].
Interactive Modelling of Branching Structures
Lintermann, University of Karlsruhe, Germany
Deussen, University of Magdeburg, Germany
	[p. 148, Visual Proceedings]
	Uses a visual programming language to specify what kind and the shape
	and appearance of the plants that are generated. It is pretty much a
	dataflow graph with components of 'high functionality' (i.e. it only
	needs 5 - 8 icons for the samples given in the write-up).
	The software is available as shareware at:
The Virtual Lego Village
Paul Mlyniec, SmartScene Engineering, MultiGen Inc.
	[p. 88, Visual Proceedings]
	This product was being demoed at a prime location in the SGI booth on
	the exhibition floor, as well. It is a multi-player VR world where 
	people can jointly build things with Lego parts. Parts can be stretched, 
	snapped together, and painted with colors and textures. [It only appears
	to run on FAST graphics hardware].
Web Doc Browser
	A visual display of a 2D cloud of points representing relationships
	between documents on the WWW. Specifying a keyword/subject will 
	highlight the location in the cloud of points which represents the
	keyword. Can zoom in on topics. Points are colored (which represents
	some other variable like document size?). 
	Implementation: Assigns positive and negative forces between the 
	points to allow the points to position themselves. A force is 
	calculated by calculating what keywords are and are not present in
	a particular document and then assigning positive force towards
	documents with similar keywords and negative forces away from 
	documents that have keywords that are 'known' to be opposite. Uses
	a full text search of documents to ascertain keywords. The presenter
	would not answer pointed questions about the weighting function used.
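	The force-based placement can be sketched as a spring layout in which
	documents sharing keywords attract and unrelated documents repel.
	Since the presenter would not reveal the actual weighting function,
	the Jaccard similarity of keyword sets and the gains below are
	stand-in assumptions:

```python
import random

def similarity(a, b):
    # Jaccard similarity of two (non-empty) keyword sets.
    return len(a & b) / len(a | b)

def layout(docs, steps=200, gain=0.05, rest=1.0, seed=0):
    # docs is a list of keyword sets; returns 2D positions for each.
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in docs]
    for _ in range(steps):
        for i in range(len(docs)):
            fx = fy = 0.0
            for j in range(len(docs)):
                if i == j:
                    continue
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                dist = max((dx * dx + dy * dy) ** 0.5, 1e-6)
                # Similar documents want a short spring, dissimilar a long one.
                target = rest * (1.0 - similarity(docs[i], docs[j])) + 0.1
                f = gain * (dist - target)  # positive pulls, negative pushes
                fx += f * dx / dist
                fy += f * dy / dist
            pos[i] = (pos[i][0] + fx, pos[i][1] + fy)
    return pos
```

	After a few hundred steps, documents with similar keyword lists end
	up clustered while unrelated ones settle farther apart.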
Visualization of the Web
	The WWW is a collection of documents and (binary) relations. Noise
	(information that obstructs information gathering) increases at a
	faster rate than the information itself.
	To address this tools have been created that display a visualization 
	of the WWW. 
	Tools that visualize the link-structure
	* Harmony
		3D rectangles in a graph that lies on a 'ground' plane 
		stretching to the horizon.
	* Narcissus
		Spheres (sites) in 3D space linked together and having small
		spheres (web pages) surrounding them.
	Tools that visualize the semantic relationships
	* Navigational view builder
		A triangle, each vertex of which represents a subject and 
		inside of which is a cloud of points representing documents, 
		points being closer to a vertex if they are more related to 
		the subject that it represents.
	The tool presented uses document-keyword lists, calculates a value of 
	similarity, and displays an associative network. It uses a sparse display
	for global navigation of the space/WWW and a dense display for local
	navigation (i.e. a LOD methodology). This 3D network is laid out
	using a spring metaphor. 
	Can select an area in the graph and create a visual bookmark that 
	appears as an icon below the display window and the icon's pictorial
	representation is of the area that the bookmark represents. Also the
	bookmark looks like a directional sign post in the display area. 
	Clicking on the bookmark icon causes the display to pan smoothly to 
	the bookmark location.
	* Makes another network graph representing the query.
	* Can also display the network of keywords of the documents.
	* Can leave signpost to record where one has been. [I may have gotten
	the above signpost = bookmark description wrong].
		* Use this to navigate online documentation (and help files).
		* Also add the display of the Web's link structure.
	This is at:
The Bridge:
	This was the name of the area where the Siggraph 'Art' show was 
	being held. Did not spend more than a few minutes here on my 
	(circuitous) way to the restroom.
	Los Angeles, August 3 - 8, 1997 -