1995 ACM Computer-Human Interaction Conference
SIGCHI 95 
	- Commentary by Mike Davis. My comments on my commentary 
	[meta commentary] are enclosed in brackets [].
	The conference took place in Denver from Sunday, May 7 through
	Thursday, May 11 at the convention center. The convention center is
	a couple of blocks from the 16th street mall and is relatively small 
	(don't ever expect SIGGRAPH to be held there). I commuted by car to 
	and from the conference each day with Neli from Boulder and only 
	went to the conference proper (no tutorials) which was from Tuesday 
	through Thursday. There were about 3000 people in attendance,
	with approximately equal numbers of men and women.
	Neli Stinchcomb was there as well as David Lesserman, and Russ
	(from Capri in Boulder).
	The conference proceedings came in 2 volumes totaling about 1000 
	pages. The optimal situation would have been to read the whole 
	proceedings before the conference so I would know what I really 
	wanted to see... but even just skimming that much information
	would be difficult in the time available.
	The events to see were papers, panels, short papers, demos and
	design sessions. There was also an exhibition, videos, posters
	and an interactive experience. Many of the events ran concurrently,
	and it was often difficult to choose among 2 to 4 very
	interesting events held at the same time.
Exhibition:
	The exhibition this year was about 70% book publishers, 20% GUI
	consulting companies and 10% commercial products. There weren't
	any GUI builders there this year (there were quite a few in
	'92).
Posters:
	The coolest poster was by a student from CU. It is a visual 
	programming language similar to Brigham Bell's ChemTrains visual 
	language (also from CU). Essentially before and after pictures 
	are drawn by the programmer and the system extrapolates motion 
	and constraints from these pictures. This is also similar to 
	KidSim and some L-System editors.
	The implementation of this tool was unique both for its design
	and its slowness. It uses a grid (like Life) and the
	objects have pre-defined ranges of motion. The tool then
	examines the WHOLE sample space (ALL possible positions of
	all the objects given their ranges of motion) and scores how well
	each configuration corresponds to the rule set. The one with the highest
	score becomes the next frame of the animated behavior of the 
	objects. This design could still be implemented to execute
	reasonably quickly, so I don't know what is really making it
	so slow.
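	[A rough sketch of the brute-force search as I understood it - the
	representation and names below are my own guesses, not the student's:]

	from itertools import product

	def next_frame(objects, rules):
	    """objects: dict name -> list of candidate (x, y) grid cells.
	    rules: list of (weight, predicate) pairs; a predicate takes a
	    dict name -> (x, y) and says whether the rule is satisfied.
	    Returns the highest-scoring configuration, i.e. the next frame."""
	    names = list(objects)
	    best_score, best_config = float("-inf"), None
	    # Enumerate the WHOLE sample space: every combination of positions.
	    for positions in product(*(objects[n] for n in names)):
	        config = dict(zip(names, positions))
	        score = sum(w for w, pred in rules if pred(config))
	        if score > best_score:
	            best_score, best_config = score, config
	    return best_config

	# Toy example: a 'ball' that prefers to move right but must avoid the 'wall' cell.
	objects = {"ball": [(x, 0) for x in range(5)], "wall": [(4, 0)]}
	rules = [(1, lambda c: c["ball"][0] > 2),
	         (5, lambda c: c["ball"] != c["wall"])]
	print(next_frame(objects, rules))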
	
Videos:
	Did not see these. They can be ordered for about $50 from ACM.
	I have heard that it contains videos of visualization widgets
	like treemaps, cone-trees, magnifiers (magic lenses), etc.
Interactive Experience:
	The only memorable 'experience' was watching David Lesserman
	put his finger in a thimble on the end of a mechanical-arm-like
	assembly. This thimble-machine is capable
	of giving haptic feedback, so by moving the thimble around one
	can FEEL 3D objects and walls. In one demo, one of the 3D objects
	was also visible on the computer screen and acted like a ball:
	it could be 'flicked' with the finger in the thimble, bounce
	off a wall, come back, and hit one's finger, and the
	finger would actually FEEL the ball hit it. 
Themes:
	Browsing vs. Searching
	Point & shoot vs. perfect, all-function, do-it-yourself UIs
		(based on the camera metaphor).
	Direct Manipulation (DM) vs. Agents, Actors, MS's BOB
	Iterative interactive queries
	Usable UIs for the WEB and interactive TV
	The commercialization of research organizations (Xerox PARC,
		IBM Watson Research Center, ...)
My theme:
	'Yeah, it's cool, but when and where is it actually useful?'
--------------------------- Tuesday -----------------------------------
*Opening Plenary
	The conference opened (8:30 am) with the 1.5 hour 'Opening 
	Plenary' session. About half an hour was spent on greetings and awards. 
	They apologized that Denver taxis were very reluctant to drive anyone
	anywhere except to and from the new airport, and that the 'fun run'
	was canceled because of too many legal hassles and the cost of 
	insurance. 
	Then one of the most boring presentations ever endured
	was presented. Their one point (that the background environment
	in which the user performs their tasks should be explicitly considered
	during the design process (i.e. their physical tools, organization,
	politics, power, perspectives)) was obvious and boring. The only
	interesting part was when some very old (1930s?) video was presented
	that showed a user (typist) being tested (for typing speed)
	while breathing into a big tube while the tester turned a
	large (2-foot diameter) dial which generated white noise
	(conclusion: the woman typed 10% faster and breathed 19% slower
	in a quiet room).
	But we all did get a free plastic Zippy (tm) letter opener 
	in the shape of a flat computer. It advertises CHI '96.
Short Papers: Information Visualization
	[Missed KidSim paper to see this]
	[Missed some of Creative Prototyping Tools to see this]
Paper: AutoGeneration of StarField displays using Constraints
	A high-level declarative language generates C++.
	It queries a database and displays the results on a 2D scatter
	plot automatically, with the widgets that control the query
	auto-generated as well. The plot is instantly updated
	each time a query widget (i.e. a slider) is moved or while it is moving.
	The high level language has an algebraic expression evaluator.
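	[A toy version of the dynamic-query idea, in Python/matplotlib rather
	than their generated C++; the 'price' attribute is made up:]

	import numpy as np
	import matplotlib.pyplot as plt
	from matplotlib.widgets import Slider

	rng = np.random.default_rng(0)
	x, y = rng.random(200), rng.random(200)
	price = rng.integers(0, 100, 200)        # hypothetical attribute to query on

	fig, ax = plt.subplots()
	fig.subplots_adjust(bottom=0.2)
	points = ax.scatter(x, y)

	slider_ax = fig.add_axes([0.2, 0.05, 0.6, 0.03])
	slider = Slider(slider_ax, "max price", 0, 100, valinit=100)

	def update(val):
	    # Re-filter and redraw on every slider movement (instant update).
	    keep = price <= slider.val
	    points.set_offsets(np.column_stack([x[keep], y[keep]]))
	    fig.canvas.draw_idle()

	slider.on_changed(update)
	plt.show()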
Paper: LiveMap
	This is a map of weather patterns with clouds and graphical
	and textual annotations. It is updated continuously in real
	time from data off the net.
	Clicking on a city displays video of any active weather
	activity at that city.
	Panning around allows the user to see the parallax created by
	having the layers of information/annotation actually be
	'above' the earth at various heights (i.e. 3D layers). This
	panning makes it easy to differentiate between the layers
	and makes the clouds look really real. The presenter mentioned
	that his setup at the lab allows him to move his head (which
	is tracked by a camera or something) to make the image 
	automatically pan around (no hands!). Also has an (undemonstrated)
	feature where the layers are jittered around at sub-pixel
	distances (apparently OpenGL, using anti-aliasing, allows
	moving things in 1/16-pixel increments). This is 
	supposed to also help the user differentiate between the
	3D layers. The layers (~10) are turned on and off by
	clicking on a legend describing the layers in the lower-left corner
	of the screen. Runs on a RealityEngine...
	Can compress the layers so the user can be sure what city an 
	attribute really refers to.
Paper: Designing Glyphs to represent multidimensional datasets as textures.
	Clayton Lewis, CU
	This talk described how to choose a graphic to represent a
	point in multidimensional space. E.g. a point with
	6 line segments coming out of it (like a star) could be used
	to represent a 6-dimensional data item (i.e. an item which
	has 6 attributes) by varying the line segments' lengths to
	correspond to the data point's attribute values.
	These are generated for each data item and arranged side-by-
	side in a 2D rectangle which then looks like some kind
	of textured material. This is used to try to see patterns
	in the data.
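	[My own sketch of the star-glyph-as-texture idea, not the authors' code:]

	import numpy as np
	import matplotlib.pyplot as plt

	def star_glyphs(data, cols=20, spacing=1.0):
	    """data: (n_items, n_dims) array of attribute values scaled to 0..1.
	    Each row becomes a small star whose ray lengths encode the values;
	    the stars are tiled into a grid so patterns read as texture."""
	    n, d = data.shape
	    angles = np.linspace(0, 2 * np.pi, d, endpoint=False)
	    fig, ax = plt.subplots()
	    for i, row in enumerate(data):
	        cx, cy = (i % cols) * spacing, -(i // cols) * spacing
	        for length, a in zip(row, angles):
	            ax.plot([cx, cx + 0.4 * length * np.cos(a)],
	                    [cy, cy + 0.4 * length * np.sin(a)],
	                    color="black", linewidth=0.5)
	    ax.set_aspect("equal")
	    ax.axis("off")
	    plt.show()

	# 400 six-dimensional items; one attribute is suppressed in the first half
	# so a visible change in texture should appear partway down the grid.
	rng = np.random.default_rng(1)
	data = rng.random((400, 6))
	data[:200, 0] *= 0.2
	star_glyphs(data)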
	
Switched to:
Panel: UI Prototyping Tool Futures 
	Aaron Marcus: [heard about this, did not see much]
	Embed metaphors, paradigms, mental models into application
	development tools. I.E. a knob which turns up the satire
	attribute of an application.
	Michael Muller:
	Three types of prototypes:
	1.	Function oriented: Artifact to show that it can be done and how.
	2.	Process oriented: Artifact to show that it can fit into user's
				current processes and perspective.
	3.	Communication oriented: To be used as a straw man to facilitate
				communication between those involved.
	1 and 2			3
	_______			__
	object			shared understanding
	solitary 		social
	developer->system	person->person, group->group
	command			democratic
	specify			explore
	thesis			synthesis
	Kevin Mullet:
	A prototype keeps you honest, is believable, and is testable
	(both user testing and functional). He thought tools [and applications!]
	should have multiple parallel representations so that the user
	can edit using the most natural form (textual, graphical, scripts,
	forms, ...) and that these representations must be synchronized
	so that when one changes the others are updated. (E.g. outlines are
	good for representing hierarchies, arc-node graphics for networks, 
	...)
	We need larger objects (radio boxes instead of radio buttons 
	are needed in UI builders). Wants scalable prototyping architectures
	so that the demo can become a prototype, which can become the application.
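	[A minimal sketch of the 'multiple synchronized representations' idea -
	the model/view names are mine:]

	class Model:
	    def __init__(self):
	        self.children = {}            # parent -> list of child names
	        self.views = []

	    def attach(self, view):
	        self.views.append(view)

	    def add(self, parent, child):
	        self.children.setdefault(parent, []).append(child)
	        for view in self.views:       # keep every representation in sync
	            view.refresh(self)

	class OutlineView:                        # good for hierarchies
	    def refresh(self, model):
	        def show(node, depth):
	            print("  " * depth + node)
	            for c in model.children.get(node, []):
	                show(c, depth + 1)
	        print("-- outline --")
	        show("root", 0)

	class EdgeListView:                       # arc-node style listing
	    def refresh(self, model):
	        print("-- edges --")
	        for parent, kids in model.children.items():
	            for c in kids:
	                print(parent, "->", c)

	m = Model()
	m.attach(OutlineView())
	m.attach(EdgeListView())
	m.add("root", "menus")
	m.add("menus", "file")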
Papers: Programming by Example
	SILK: Interactive sketching for early designs
	Brad Myers
	This allows the designer to sketch a prototype and then
	generate widgets from the sketches. It uses gesture recognition
	software to interpret the sketches and handles only about 5 kinds
	of widgets at present. The tool automatically records the
	entire history of the design.
Lunch:	Had lunch at Chez Thuy Hoa, a Vietnamese restaurant that does
	not seem to be related to the Boulder restaurant with a similar
	name. One block from the convention center. Food is only OK [but I'm
	picky].
Papers: Information Access
Paper:	Xerox Parc: Presented a new metaphor: Foraging.
	The user, looking for data, is like a predator looking
	for prey. How successful they are depends on how long
	it takes to find, handle, and eat the desired info. This
	can be modeled using ideas from other disciplines that
	look at these issues (how much energy does it take to get
	that info calorie, will the user starve?). 
	The time spent by a user on a task can help identify the 
	"paradigm":
	Time			Paradigm
	----			--------
	weeks/months		social
	hours/days		Adaptive/rational
	1/10-10 seconds		cognitive
	1/100 seconds		biological
	[implies that some of the user's tasks (domain foraging) occur
	over months and become a social activity]
	This is a study of 'informavores'.
Paper:	TileBars : Xerox PARC: 
	The presenter described the process of searching as 'iterative
	refinements of queries' by the user (i.e. an infinite loop of
	query->results->query->...). This tool does a full-text search 
	and divides up the docs into paragraphs of related topics [hard
	to do] and then generates a grid of squares which are filled in 
	with the results of the search.
	For example: a search for 'weather' and 'Colorado' and 'deaths'
	might generate, for a document:
	
	xoooxxooox	[paragraphs where weather is found]
	ooooxoooxo	[paragraphs where colorado is found]
	xxxxxooooo	[paragraphs where deaths is found]
	The fifth paragraph/section contains references to all three 
	keywords. This technique can reveal the overall relevant 
	content of a group of documents at a glance.
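	[A toy sketch of building such a grid - here 'segments' are just
	paragraphs split on blank lines; the real system's topic-based
	segmentation is the hard part this skips:]

	def tile_bar(document, keywords):
	    segments = [p.lower() for p in document.split("\n\n")]
	    rows = []
	    for kw in keywords:
	        # 'x' where the keyword occurs in the segment, 'o' where it doesn't.
	        row = "".join("x" if kw in seg else "o" for seg in segments)
	        rows.append(row + "  [" + kw + "]")
	    return "\n".join(rows)

	doc = "Weather in Colorado...\n\nDeaths reported...\n\nColorado weather deaths..."
	print(tile_bar(doc, ["weather", "colorado", "deaths"]))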
Paper: An Organic User Interface for ...
	"Burn cycles not people" => Asynchronous query processes.
	Used for searching scientific databases which have 17 million
	articles?, 200 million references. Mixes searching and browsing.
	Arranged like (and called) a butterfly with references on the
	left wing and citations (cited bys) on the right. Very 3D/roomish
	interface. "The most scarce resource is the user's attention"=>
	when the mouse pauses on something, a background process auto-
	matically starts fetching related data.
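	[A small sketch of the pause-then-prefetch trick - the dwell threshold
	and function names are my guesses:]

	import threading

	DWELL_SECONDS = 0.5                  # how long a pause counts as interest
	_pending = None

	def fetch_related(item):
	    # Stand-in for the real background query against the citation database.
	    print("prefetching data related to", item)

	def on_hover(item):
	    """Call when the pointer comes to rest over an item."""
	    global _pending
	    _pending = threading.Timer(DWELL_SECONDS, fetch_related, args=(item,))
	    _pending.start()

	def on_leave():
	    """Call when the pointer moves away before the dwell time elapses."""
	    if _pending is not None:
	        _pending.cancel()

	on_hover("Smith 1994")   # with nothing cancelling it, the fetch fires ~0.5s later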
Short Papers: UI Specification and Programming
Paper: ... Demonstrational Visual Shell
	Compared this state-based/iconic language (called Pursuit) with 
	a text-based language and recorded users' actions. The state-based
	language was 2x better in accuracy but not speed or comprehension
	(there were a lot of questions from the audience about why and
	what this meant).
switched to:
Short Papers: Anthropomorphism and Agents
Paper: ...Adaptive Hypermedia
	There are 4 types of people:
	1.	Activists
	2.	Reflectors
	3.	Heuristics
	4.	Pragmatists
	The first 3 liked lectures better than adaptive hypertext,
	which they liked better than static hypertext. Group
	4 likes adaptive hypertext, then lectures, then static 
	hypertext. Adaptive hypertext looks at the user's actions
	and generates links accordingly. The presenter decided to
	increase the value which they personally assigned to 
	lectures based on this study.
	
Paper: Is it the computer's fault?
	Programmers blame the computer for some faults, rather
	than always blaming the engineers who built and programmed
	it (even after some reflection). [Does one blame the car
	for breaking down or all the factories that made it?].
	
Paper: Computer Personalities
	A few basic types exist as defined in Psychology. People
	can key in on the type very quickly. So they made a simple,
	text-based pgm to test this. Using the psych truism that
	Submissives like submissives and Dominants like dominants,
	they made both a D and an S interface:
	D: You must now login
	S: Maybe you should think about logging in now
	Results showed that D's did indeed like the D interface, etc.
	During testing they emphasized to the users that they were
	testing the 'interface' not testing the 'users'.
	4 measurements of a good UI: Friendly, intellectual attraction,
	utility and emotional satisfaction.
	Conclusion: the textual language in UIs is very important.
Paper: Optimal Exploration of an Application experienced for the first time.
	CU people 
	Users traverse differently but always call it optimal. Previous
	application experience affects them. They mentally evaluate a formula
	with the following arguments:
	Cu = cost of undo
	Cw = cost of a wrong cmd
	Cn = number of commands
	Conservatives avoid mistakes at all costs
	Optimists think there will be something that does exactly what they
	want somewhere in the UI.
Paper: Automatic To-Do list
	Knowledge-based editors .... Nag!
	Expert users construct plans
	Expert plans may have illegal intermediate states
	To-Do lists ... Remind!
	The To-Do list says which menu item to click and optionally can do it
	"for the user". Separate domains for different users - i.e. a layout
	person does not need to know about electrical connections that need
	to be made.
	Q. How much advice to give? esp. when > 1 way to do something?
	Q. How to encode all of this expert info in the list.
WIM - Worlds in Miniature: A small image of the whole world found within
	the big one which the user can use to navigate. I.E. a locator
	window for 3D. Ex: in a 3D architectural walkthru a model of the
	building shows which room the user is in and the user can click
	to position themselves elsewhere. (p. 265 CHI Proceedings).
--------------------------- Wednesday -----------------------------------
	
	
Panel: Interface Styles: Social Interaction vs. Direct Manipulation
	CHI is fundamentally social & natural. This is ALWAYS true, no
	matter the UI. It is not possible to NOT have social interaction 
	and emotion where humans are involved. Using this info can help design
	products. Attributes:
	Personality, Politeness, Intelligence, Emotion, Roles, Perception
	[UI Builders need a knob to turn up the politeness of an application
	for submissive users].
	Good UIs are:
	Easy: (this is just what it is to be human: we like things easy)
	Sophisticated: (we want more behind the scenes)
	Fun: (arousal, excitement)
	Examples of social (anthropomorphic) UIs: General Magic, BOB, Lion
	King, Picard in Compton's..., Jane Bryant Quinn in Quicken.
	Users want a restaurant, not a grocery store - they don't want to
	worry about ingredients. They want freedom from choice.
	BOB: Physical place: one place to go, rooms, personality, politeness,
	movement. BOB BOB BOB - Bob jokes, Bob is a joke, ...
	---------------
	Ben Shneiderman - Direct Manipulation
	People want to feel THEY did the job. UIs should have Rapid, Incremental,
	and Reversible actions. Immediate and Continuous feedback. Judicious
	use of Treemaps, cone-trees, ...
	3 step process: Overview (user sees all of the space and things that
	can be done), Zoom and Filter (narrow down overview into desired area),
	Details-On-Demand (show details if user desired, otherwise lay-off).
	Social/Anthropomorphic UIs undermine users' sense of responsibility
	[TRUE?] and destroy the sense of accomplishment.
	---------------
	Speaker: ???
	Machines should know about social processes to motivate, attract, 
	entertain or act as substitutes for a human. The speaker then talked
	about DECFACE? which puts up a face and talks to the user. Did test
	using Game Theory's 'prisoner's dilemma' where the computer asks the 
	user to agree to a plan and the test is whether the user keeps 
	their promise. They don't with non-humans. They tried a:
	talking dog cartoon -> user cheated.
	talking dog video   -> user cheated.
	talking human image -> user cheated.
	talking human video -> user kept promise.
	---------------
	Discussion:
	UIs should keep causality. The failure of talking cars
	and talking ATMs was discussed. Perhaps a car that told
	a joke to the user when the door was opened instead of
	saying 'please put on your seat belt' might have been more
	successful.
	Computers are like T.V.s, not cars or ATMs. TVs have 
	gained more and more personality over the years. But the TV
	is not talking, a person is talking THRU the machine. The line
	between real and unreal is disappearing, so computers will
	LOOK real. People talk at the TV, so blaming computers for
	failure is in line with this.
	BOB BOB BOB problems: mobile windows, Windows Apps throw
	user into the pgm manager with no obvious way to get back, mixes
	social and computer terminology. Bob is only a surface
	implementation over ordinary menus. Needs deeper functionality
	(info seeking, commiseration,...)
	Same feeling about satisfaction with auto-flash/focus cameras
	which do a lot of the work now. Do some users feel they need
	perfection? Similarly with automatic transmissions... It would be
	nice to be able to turn it off...
	People want to deal with humans in some tasks and not
	others: they don't want to be bothered with a real human
	at ticket counters. [But want real human nurses.] People change 
	their minds about this if the software is very very good.
	Computers should greet the user differently from time to time just
	like [some] real humans.
	Poll of audience taken: RESULT 60% to 40% in favor of DM
	over social interfaces!!! (see MAC/Anti-MAC debate).
	[But pet owners talk to their pets!]
Panel: HCI perspectives on info superhighway (NII)
	[mostly boring, no CHI issues, I shoulda been elsewhere].
	Ameritech: Users do NOT want to INTERACT after a hard day's
	work.
	Judge Johnson: SunSoft: CPSR policy paper for Clinton
	- Info Highway is a collection of services, not just one
	- Some companies and providers will dominate
	- TV turned out the way it did because of the way it was funded
	(advertising)
	- Fortune 500 will control it as a vehicle for making money
	- Push data on you instead of you pulling desired data down.
	- Junk mail in TONS.
	- TV won't be computerized, Computers will be TVized
	- Internet and non-commercial local nets will be irrelevant
	- Electronic polling, not electoral discussion: Mobocracy,
	Ophracracy.
	NII: the usability problem is the most demanding of any system 
	ever developed because of the wide range of types of users.
switched to:
Panel: Browsing vs. Search
	---------
	Queries ARE links, Links ARE queries
	To have a Flexible UI means the UI allows for flexible info
	exploration. Hypertext is not a DB, it is a UI. The challenge
	of UI design is to communicate, using a bitmap display, between
	the world inside the user's head and the world (functionality)
	inside the computer program.
	User Defined Queries -> System defined Browsing
	---------
	Furness: Tried to differentiate between Queries and Searches:
		Info Access Techniques: (Query / Navigation)
		Tasks/Activities: (Search / Browsing)
	Maybe query results should be posted in the navigation structure.
	Query has one kind of semantics and navigational structures
	have another, and together they might be even better. The semantics
	of structures are either a fn of the data or a fn of the domain.
Switch to:
Demonstration: Escalante
	Had a neat way of adding table grids: click on the table button,
	then click on the screen as many times as the number of
	rows desired (horizontal lines appear one by one). Then
	click on the screen as many times as the number of columns
	desired and the table is interactively grown.
	Demo'd a water flow game made with Escalante where the user
	(child) puts pipes together and buckets to catch the water... 
	all built upon the 'GrandView' visual pgming environment.
Switched to:
Papers: Advanced Media for Collaboration
	Good UIs provide users an invitation to interact, using audio
	and visual cues. Why is the WWW successful? Amusement, Exploration,
	Socializing. People like using them, but the time spent doing it
	needs to be justified (by seeking information and using it 
	for communication).
Lunch with folks from IBM Watson Research Center who have been working
for 5 years now on a project that has a UI similar to Neli's HiLife project.
The woman (name??) had an idea I liked about expert system specification
interfaces which is that they need to be accurate, simple, and 
"reviewable" so that experts can review what they or another expert has 
done so that they can trust the system.
Panel: MAC versus ANTI-MAC --------
	? from Sunsoft: ANTI-MAC guidelines
	- Computers are not desktops or rooms
	- Not always see + point, use language like humans
	- Direct-Manipulation keeps the user in an assembly-line
	type environment - there should be agents and languages
	- Humans can't control complex things, and don't want to control
	boring things.
	- The real world is diverse and rich
	- WYSIWYG is really more like What You See Is All There Is
	- Instead, represent meaning explicitly - use multiple views
	- Sometimes mode-full-ness is useful, the computer SHOULD
	know what we are doing, so it can help us
	
	MAC		ANTI-MAC
	___		________
	Metaphor	Reality
	DM		Delegation
	See+point	Describe and Command
	Consistency	Diversity
	Austin Henderson - Apple ----------
	BORING, let's stay in the nice safe boat where we are comfy
	[I kid you not!]
	Jakob Nielsen - SunSoft -----------
	Referred to Bill Buxton's graph which shows human capability
	as a flat line and the functionality and complexity of machines
	(UIs) ever rising to infinity (the lines crossing over at 
	about the std VCR recording UI) [as delivered at CHI 92, we
	were there!]. Jakob, however, shows a graph where humans,
	after the year 2000, actually get smarter, because they learn
	how to use computers (the Nintendo generation).
	Also: the learning time between, say, a pictograph of a bird
	and the text B-i-r-d is very different, the pictograph taking
	far less time to learn (MAC). But humans have found the
	alphabet quite useful and worth the time it takes to learn.
	The advantages of language-based UIs are that one can have hypotheticals,
	variables, criteria to select by, ... It is OK to spend some time
	learning a script language, esp. if one will spend one's whole life
	using it (computers). 
	The modern computer has 240 times more information than the 
	MAC classic (based on screen size). Soon screens will have
	an additional 340 times more information available ==
	81,600 times more information than MAC + sound + video + ...
	Don Norman - Apple ----------------
	[Don Norman is a fountain of cliches and thinks all computers
	suck (a die-hard ubiquitous-computing fan).]
	"One app fits all => the app fits nobody well"
	Currently we have:
	1.	GUI
	2.	WYSIWYG
	3.	DM
	4.	Hierarchical File Structure
	5.	Large Proprietary OS
	6.	Large complex Apps and proprietary data fmts
	7.	The std business model (planned obsolescence for software
		... upgrades, bugs, must haves).
	Wants longer lived software => change business model. Referred to the
	writer's model (originated by ???) where books were:
	Handwritten		-	Mainframes
	Gutenberg press		-	Desktops
	paperback book		- 	intimate mobile computers
	Wants: "Computers that make it easy to live the way we want to."
	People don't want big screens, they want small mobile computers.
	------- Sun Rebuttal -----------
	Languages have been standardized for much longer than MAC UIs,
	last longer than UIs (BASIC, Fortran, COBOL are examples),
	and are portable. Language has ~2 million years of usability testing.
	Do we want a Metaphor or a real Developmental model of the task 
	at hand?
	----- Apple Rebuttal -------
	Languages are standardized? The intersection of all Fortran dialects 
	is the empty set.
	------ Sun Rebuttal - Nielsen -----
	Subscribe to functionality over the Net and get updates for
	the features that are desired automatically, as they are released.
	Other useless (to this particular user) features are not included at
	all in the app.
	[Cool idea]
	Your data is yours, your pgms are commodities.
	We want mobile computing AND big screens.
	------------
	Questions + Answers
	Want human editors/moderators of large amounts of data.
	We teach kids English today; we should also teach them to use
	computers (languages).
	People who grow up with computers are better at using them.
	We do a lot of things the computer DEMANDS us to do, not what
	WE want to do.
	Audience polled: the VOTE: 2 to 1 in favor of the MAC.
	[I am still in shock].
Demonstrations: Tools for designing interactive services
	------ NIC: Interaction on WWW --- Dan Olsen - Brigham Young
	HTML document -> HTTP server -> HTML Browser -> MIME -> NICIU interpreter
	This format is defined for the specification and transport of whole
	UIs (not just data). Comes with:
		UI interpreter client
		UI Editor (generates format)
		UI (custom) Widget designer
	Others that are similar: Tcl, Sun's JAVA/HotJAVA, Scheme?, Obliq
	More info at http://issl.cs.byu.edu/home.html
	Example: see a NIC calculator on a HTML doc, click on it, and
	the code is downloaded that generates the widgets for the calculator
	as well as its behavior - no configuration or installation
	required. Still researching security requirements.
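	[A toy sketch of the 'ship the whole UI as data and interpret it on the
	client' idea - this spec format and the calculator are my invention,
	not the NICIU format:]

	import tkinter as tk

	spec = {                              # what might arrive over the wire
	    "title": "calculator (toy)",
	    "widgets": [
	        {"type": "entry",  "name": "display"},
	        {"type": "button", "label": "7", "action": ("append", "7")},
	        {"type": "button", "label": "+", "action": ("append", "+")},
	        {"type": "button", "label": "=", "action": ("evaluate", None)},
	    ],
	}

	def interpret(spec):
	    root = tk.Tk()
	    root.title(spec["title"])
	    entries = {}
	    def run(action):
	        op, arg = action
	        disp = entries["display"]
	        if op == "append":
	            disp.insert(tk.END, arg)
	        elif op == "evaluate":
	            value = str(eval(disp.get()))   # toy only - this is exactly the
	            disp.delete(0, tk.END)          # kind of security question they
	            disp.insert(0, value)           # said they are still researching
	    for w in spec["widgets"]:
	        if w["type"] == "entry":
	            e = tk.Entry(root)
	            e.pack()
	            entries[w["name"]] = e
	        elif w["type"] == "button":
	            tk.Button(root, text=w["label"],
	                      command=lambda a=w["action"]: run(a)).pack()
	    root.mainloop()

	interpret(spec)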
	---- DynaDesigner - AT&T Bell Labs
	A Visual Pgming environment so apps can be created in
	hours by non-pgmrs on multiple platforms. Updated recently
	to support set-top boxes (STBs). Generalized the UI API after seeing
	commonalities between platforms (user dialog: present info,
	present options, get input from user). Generates a high-
	level description language (a la NIC above and Visual ADE
	and ...) that is sent to an execution environment that
	contains a state machine and a presentation unit. Has
	a simulation capability that displays mock-ups of the 
	screen so that the app can be tested w/o the destination
	platform.
	This has a cool visual language editor that has icons 
	representing forms, database operations, ... and these
	icons have lines extending out which represent the
	possible results of the icon's operation. This is
	really similar to the dataflow language editor: AVS.
	I have never seen STBs that have menus and stuff, maybe
	it puts the UI on the TV itself. The user uses a universal
	remote control to select options in the STB's UI. So at
	the bottom of the menu is a bunch of icons that represent
	FastForward, Stop, Record, ... and next to the icons are
	what operations the icons correspond to.
	The menu widgets are automatically generated. This tool
	helps/makes the creator focus on *Content Specification*
	not *UI Design*. When the application is simulated, the
	icons in the Visual programming editor highlight to
	indicate where the pgm is. Recently added HTML generation.
	Embedded UI cliches in language to raise the level of
	abstraction. Cliches: 
		- look-up data, user selects data
		- user input data, verify data
		- user input key, verify key, load data
	Allows pgm'r to customize menus a little bit: background 
	bitmaps, color
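	[A toy sketch of a description interpreted by a state machine plus a
	presentation unit - the flow format is mine, not DynaDesigner's:]

	flow = {
	    "start":    {"prompt": "Press 1 for listings, 2 to record",
	                 "next": {"1": "listings", "2": "record"}},
	    "listings": {"prompt": "Showing listings. Press 0 to go back",
	                 "next": {"0": "start"}},
	    "record":   {"prompt": "Recording scheduled. Press 0 to go back",
	                 "next": {"0": "start"}},
	}

	def present(text):
	    """Presentation unit stand-in: just prints; on a set-top box it would
	    draw the menu and the remote-control icons on the TV screen."""
	    print(text)

	def run(flow, state="start", inputs=()):
	    """State machine stand-in, driven by a scripted input sequence so the
	    app can be simulated without the destination platform."""
	    for key in inputs:
	        present(flow[state]["prompt"])
	        state = flow[state]["next"].get(key, state)
	    present(flow[state]["prompt"])

	run(flow, inputs=["1", "0", "2"])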
--------------------------- Thursday -----------------------------------
	
Short Papers: Drawing, Painting, and Sketching
	------ 3D Painting
	Allows user to actually draw on a 3D surface. Layers
	contain 2D and/or 3D objects, Objects can span layers. Does
	NOT have two-handed operation (where the user holds the
	object in one hand and paints it with the other).
	[Not sure what was special about this app.]
Switched to:
Demonstrations: Accessing Information
	---- Personalized Galaxies of Info
	[Missed this because I was at the above paper but Neli and David say
	it was very good].
	---- Hyper-G 
	Hyper-G is a language like HTML (or a protocol like HTTP?) and Harmony 
	is its client (like Mosaic). It draws a red rectangle around pickable hyperlinks.
	Uses Wavefront's 3D data format. Uses Xerox PARC's 3D flying technique
	where the user clicks on some object in a 3D scene and then is flown
	there in such a way as to approach the destination head-on (along
	the destinations normal vector). Harmony has a display that
	graphically shows where the user is with respect to hyperlink-space
	(has a bunch of lines converging on a point (the user) then a bunch
	of lines fanning out from the point). I18n done in client. Has a 3D
	viewer of directory? trees where the user is at the root and the
	leaves extend toward the horizon. It dynamically loads nodes
	in this tree browser as the user pans around. [So this seems
	more mature and complete and powerful than HTML - but there is no accounting
	for why one thing succeeds and another doesn't. They ARE making
	HTML<->Hyper-G gateways].
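	[A small sketch of the fly-to-along-the-normal idea - the easing and the
	standoff distance are my guesses:]

	import numpy as np

	def fly_to(cam_pos, target_pos, target_normal, standoff=2.0, steps=30):
	    """Return camera positions that end on the target's normal vector,
	    'standoff' units away, so the final approach is head-on."""
	    n = np.asarray(target_normal, float)
	    n = n / np.linalg.norm(n)
	    end = np.asarray(target_pos, float) + standoff * n
	    start = np.asarray(cam_pos, float)
	    # Ease-in/ease-out interpolation from the current position to the
	    # approach point; the camera would look at target_pos the whole way.
	    ts = 0.5 - 0.5 * np.cos(np.linspace(0, np.pi, steps))
	    return [tuple((1 - t) * start + t * end) for t in ts]

	path = fly_to(cam_pos=(10, 5, 8), target_pos=(0, 0, 0), target_normal=(0, 0, 1))
	print(path[0], path[-1])    # ends at (0, 0, 2), facing the object down its normal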
Papers: Info Visualization
Paper: Hyperbolic Trees - Xerox PARC
	Goal is to move work from user's cognitive system to user's perceptual 
	system. Animated transitions help this (The user pans around by 
	selecting a node to be the 'top' or 'center' node and then the graph
	smoothly animates the move to this new position. Draws straight
	(instead of hyperbolic) lines during this operation (runs on
	a Sun 10 - no reality engine here)). The theme for this paper
	is Focus+Context which means that you can see close up where
	you are but can also see the entire space in the same picture/graph.
	(i.e. no scrollbars allowed). Can see approx. 600 - 1000 nodes
	in the graph at once. 
	Thistle leaves are hyperbolic planes - this allows these 2D
	leaves to attempt to fill a 3D space and be less edible.
	Hyperbolic mappings preserve angles and shapes. Lots of effort
	went into making this work:
	1.	Preserve orientation of root node and lines extending
		from this node.
	2.	Keep the time of the animation down to one second. The system
		runs a continuous loop (render(), animate(), ... governor()).
		If the governor finds that > 1 sec has elapsed since the last
		time it was visited, it tells everyone else in the loop
		to go faster (and they then reduce their quality); a rough
		sketch of this loop follows the list below.
	3.	The nodes get smaller and smaller as they are farther and
		farther from the center node.
	Future:
	4.	Address disoriented user problems by using landmarks or coloring
		the path from the current node back to the root.
	5.	Some method to show how deep in the hierarchy a node is.
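	[A rough sketch of that governor loop - the back-off factors are my own:]

	import time

	class Governor:
	    def __init__(self, budget=1.0):
	        self.budget = budget          # one-second animation budget
	        self.last = time.monotonic()
	        self.quality = 1.0

	    def check(self):
	        now = time.monotonic()
	        if now - self.last > self.budget:
	            # Too slow since the last visit: tell everyone to go faster.
	            self.quality = max(0.1, self.quality * 0.8)
	        else:
	            self.quality = min(1.0, self.quality * 1.05)
	        self.last = now
	        return self.quality

	def render(quality):
	    pass                              # e.g. draw straight instead of hyperbolic arcs

	def animate(quality):
	    pass                              # e.g. take bigger interpolation steps

	gov = Governor(budget=1.0)
	for _ in range(5):                    # the continuous render/animate/govern loop
	    q = gov.check()
	    render(q)
	    animate(q)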
Paper: GeoSpace - MIT Media LAB
	[Kind of a Touch Screen GIS for Ordinary Folk]
	Plans/Rules are built into the system so that when a user
	wants to learn X, a number of steps are followed, which
	have assigned visuals, that teach X. 
	Ex: "Learn Transportation Systems in
	Cambridge" means the user wants to learn about subways, streets,
	... and a number of steps are followed which show this info to the
	user. The user presses on city Y, then roads near there become
	larger and clearer and roads farther away become dimmer. There
	is a legend where the user can specify how important, from 'not
	at all' to 'somewhat' to 'very' it is to see airports, landmarks, ...
Paper: Movable Filters - Xerox PARC
	Combines 2 previous good ideas into one: 
		Magic Lenses + Dynamic Queries.
	Assign a filter/query to magnifying glass. Order of lenses is used
	to specify order of Boolean queries. Also supports Fuzzy Queries
	(queries which return a value between 0...1 instead of just 0 or 1).
	There are lenses that sort and lenses (soon?) that make bar charts.
	The example was a USA map that had cities as little squares. The
	squares underneath the lenses are partially filled if they partially 
	match the queries. [This is cool And fun And, I think, useful.]
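	[A toy sketch of a lens with a fuzzy query - the attribute, threshold,
	and the product-composition of stacked lenses are my own guesses:]

	def fuzzy_match(city, max_rainfall):
	    """Fuzzy query: 1.0 well under the limit, 0.0 well over, graded between."""
	    margin = 10.0
	    return max(0.0, min(1.0, (max_rainfall - city["rain"]) / margin + 0.5))

	def apply_lenses(cities, lenses):
	    """lenses: list of (region, query) in stacking order; a city covered by
	    lenses gets a fill fraction equal to the product of their query scores."""
	    results = []
	    for c in cities:
	        fill, covered = 1.0, False
	        for (x0, y0, x1, y1), query in lenses:
	            if x0 <= c["x"] <= x1 and y0 <= c["y"] <= y1:
	                covered = True
	                fill *= query(c)
	        results.append((c["name"], fill if covered else None))
	    return results

	cities = [{"name": "Denver", "x": 3, "y": 5, "rain": 15},
	          {"name": "Seattle", "x": 1, "y": 8, "rain": 38}]
	lens = ((0, 0, 10, 10), lambda c: fuzzy_match(c, max_rainfall=20))
	print(apply_lenses(cities, [lens]))   # partially-filled squares, in spirit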
	
Discussion by Steven Roth: 
	1. Hyperbolic trees:
		Supports the tasks:
		a. Learning about STRUCTURE, about the HIERARCHY
		b. Finding particular nodes and relations between nodes.
	2. GeoSpace
		Supports finding and exploring relations among diverse
		information
		Eliminates DB details and much of the UI
	3. Movable Filters
		Supports analysis by identifying and filtering subsets of the
		information
	Q. How much Context is there? How much do we need? and When? and what
	is the Context? Who is the User?
	Tasks:
		1.	Domain
		2.	Data + Resources
		3.	Interface
	
Short Papers: Web Browsing and Navigation
Paper:	Auditory enhancements for Mosaic
	Modifications, made to browser only, to add auditory feedback (sound) to:
	- shift cognitive load
	- monitor background processes
	- reinforce visual events
	- increase engagement
	- avoid visual miscues
	
	In particular to communicate sound info about:
	- progress of data xfer (startrek door-opening sound when done)
	- info about link destination
	- feedback for user actions
	i.e. Progress, Errors and Switching to another program
	Uses a click/clack sound for Motif pushbuttons [good idea! lets the user
	know they did something].
	Tries to use sound to give info about destination file type/size/errors
	(i.e. missing links). Sound effect design is not well researched. Sound
	is at a low volume so it doesn't disturb cubemates.
Paper:	DeckScape - a new paradigm for browsing the Web
	Metaphor is a deck of playing cards, each web page is a card.
	A deck is a window, cards overlay each other in the window.
	Can cut and paste between decks, create new ones, ex: a hot-list
	deck which is sort of a clipboard. Can click on a link and the
	result is displayed to the right, and DOESN'T remove orig card. Can
	request that all pages that are linked from current be brought 
	in. Can search for strings of text in these pages/cards.
Paper:	CyberBELT
	This is like a few other UIs (StarFields, ...) which have
	a zoomable scatterplot to the left of a control panel that
	has widgets that specify complex queries. This app searches
	movies that are indexed by 10,000 actors, time of release,
	etc.... Queries are controlled by widgets (sliders, ...) and the
	display is automatically updated when the user changes
	a widget/query. Neli says the user can click on one of the
	(zoomed-in) dots and see a video clip of the movie.
Paper:	VGrep: Graphical tool for exploration of textual documents
	Jeffrey McWhirter, CU
	[I missed most of this :-(]
	This visualizes text (and source code!) by reducing lines of text
	to short multi-colored lines that indicate semantic and contextual
	information and uses indentation to display what large numbers 
	of lines of text are all about.
Paper:	Showing the context of nodes in the Web ...
	To help those lost in hyperspace. Use overview diagrams
	which show context of nodes with respect to important landmarks.
	Algorithmically defining landmarks is what is mostly discussed
	here: nodes with high backward and forward connectedness, large 
	amounts of usage...
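	[A toy sketch of scoring nodes as landmarks by in-links, out-links, and
	usage - the weights are arbitrary; the paper's actual formula wasn't noted:]

	def landmark_scores(links, visits, w_in=1.0, w_out=1.0, w_use=2.0):
	    """links: list of (src, dst) hyperlinks; visits: dict node -> access count."""
	    indeg, outdeg = {}, {}
	    for src, dst in links:
	        outdeg[src] = outdeg.get(src, 0) + 1
	        indeg[dst] = indeg.get(dst, 0) + 1
	    nodes = set(indeg) | set(outdeg) | set(visits)
	    scores = [(w_in * indeg.get(n, 0) + w_out * outdeg.get(n, 0)
	               + w_use * visits.get(n, 0), n) for n in nodes]
	    return sorted(scores, reverse=True)

	links = [("home", "docs"), ("home", "faq"), ("docs", "faq"), ("faq", "home")]
	visits = {"home": 40, "docs": 5, "faq": 12}
	print(landmark_scores(links, visits)[:2])   # the most landmark-like nodes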
Paper:	Shared Web Annotations
	
	Architecture:  
		Web Document Server ------------>
						|
						|----> merge (Browser)
						|
		Annotation Document Server ----->
	I presume that they hacked Mosaic or some other browser to look
	up the URL of a WEB document in the Annotation document server/
	repository and then download any annotations that have been 
	assigned to this WEB document.
	Words in the document are highlighted if they are annotated and 
	to the right of the highlighted word is a picture of the person
	who did the annotation. This picture is of the size of the text font
	and is like a photograph. So at a glance the user can scan a document
	and see what and who annotated the document. Pressing the right-btn
	on the mouse on an annotation displays a yellow-sticky type window
	with some textual annotations. Dbl-Clicking (I believe) pops up
	a new browser window with the annotation and follow-up annotations
	and keeps the original document window still visible.
	Social/dynamic implications are that there will be annotation servers
	that subscribe to a particular viewpoint or maybe a particular person.
	These annotation 'webs' then will be a whole new web built on top
	of the current WWW. Example: Carl Sagan could have an annotation
	for those of us who want to know which sites he gets HIS info from,
	or teachers could post an annotation for what they want their students
	to look at...
	Problems: The annotations and original web docs need to be kept in
	sync and so will need version control on both. Right now the annotation
	communication data flow uses a special protocol, not HTTP.
	[This is great, Webs on top of webs].
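	[A sketch of the data flow as I presume it works - the annotation server
	URL and its reply format are invented, not theirs (and their real
	annotation protocol is not HTTP anyway):]

	import json
	import urllib.parse
	import urllib.request

	def fetch_annotated(doc_url, annotation_server):
	    """Get the page, then ask a separate annotation server what has been
	    attached to this URL, and hand both to whatever merges/renders them."""
	    with urllib.request.urlopen(doc_url) as resp:
	        html = resp.read().decode("utf-8", errors="replace")
	    query = annotation_server + "?doc=" + urllib.parse.quote(doc_url, safe="")
	    with urllib.request.urlopen(query) as resp:
	        # e.g. [{"anchor": "...", "author": "...", "photo": "...", "text": "..."}]
	        annotations = json.load(resp)
	    return html, annotations

	# html, notes = fetch_annotated("http://example.org/paper.html",
	#                               "http://annotations.example.org/lookup")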