UCGIS Virtual Seminar - Fall 1998
Part 1: Background

Thread index:

- Part 1: Background topics -- Dawn Wright, 09/27/98
- The challenge of including time -- E. Lynn Usery, 10/01/98
- RE: The challenge of including time -- James Nichols, 10/05/98
- re: the challenge of including time -- Ronald William Ward, 10/06/98
- RE: The Challenge of Including Time -- Bill Moseley, 10/06/98
- reply to Bill's observations -- Ronald William Ward, 10/06/98
- 3-D clouds -- Bill Moseley, 10/06/98
- RE: The Challenge of Including Time -- James Nichols, 10/06/98
- the ears -- Wilmot Greene, 10/06/98
- Re: sound -- Ronald William Ward, 10/07/98
- Re: Sound -- Erik Shepard, 10/07/98
- (edited) -- Wilmot Greene, 10/07/98
- sounds from everyday experiences -- Ronald William Ward, 10/07/98
- Sonification and ADR for Representing Spatio-Temporal Variation -- Byong-Woon Jun, 10/07/98
- Re: The Challenge of including time -- Byong-Woon Jun, 10/06/98
- Re: The challenge of including time -- Erik Shepard, 10/06/98
- Some further thoughts regarding time -- Erik Shepard, 10/07/98
- At the risk of sounding simplistic, I think I'll dive in here... -- Deana Pennington, 10/06/98
- Comments on benefactors -- Erik Shepard, 10/07/98
- Comments on D. Pennington's observations -- Ronald William Ward, 10/07/98
- Comments on comments -- Erik Shepard, 10/07/98
- Re: comments on comments -- Ronald William Ward, 10/07/98
- we'll figure it out -- Wilmot Greene, 10/07/98
- Reply -- Deana Pennington, 10/07/98
- Comments on representation of inexact spatial concepts -- Byong-Woon Jun, 10/07/98
- Inexact Representations Extensions -- James Nichols, 10/07/98

------------------------------------------------------------------------

Date: September 27, 1998 09:40 PM
Author: Dawn Wright (dawn@dusk.geo.orst.edu)
Subject: Part 1: Background topics

Background discussion will include:

- Limitations of current representations based on the map model
- Three-dimensional representation, i.e., full 3D GIS
  -- Planar representation model (GIS)
  -- Spherical representation model
- Mathematical model representation versus data-based representation (GIS)
- Representation of inexact spatial concepts, e.g., near, far, above

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1566)

------------------------------------------------------------------------

Date: October 01, 1998 01:10 PM
Author: E. Lynn Usery (usery@uga.edu)
Subject: The challenge of including time

As a challenge to students in this seminar, I invite you to watch a local news weather forecast and comment on the following. Generally, these forecasts contain attempts to show geographic data over time, such as satellite images of cloud cover for a 3 to 12 hour period. While the many images are "animated," a clock is often shown ticking off the hours in synchronization with the animation. I often find these presentations confusing, particularly when the clock is omitted and only the sequential images are shown. This "GIS" presentation raises several representation and visualization questions:

1) What representation of data is necessary for this graphic visualization?
2) Can better visualizations be prepared to convey this type of information?

3) How can the representation of the data be enhanced to better support the visualization requirements?

4) Can you suggest a representation model for dynamic spatial data?

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1642)

------------------------------------------------------------------------

Date: October 05, 1998 01:25 PM
Author: James Nichols (jnichols@extension.umn.edu)
Subject: RE: The challenge of including time

I see the major difficulty in deriving useful information from these animations as the fact that the spatial and temporal components cannot be examined simultaneously in any detail. If you focus on the spatial animation you cannot possibly concentrate on the time, and if you focus on the time you cannot discern much detail from the spatial information. A somewhat more successful representation is possible if the observer is able to "slide" a time bar back and forth, stopping it at time-locations to be viewed in more spatial detail, or if a user could select a specific spatial extent to view while the time changes. Perhaps some combination of these options could be implemented, somewhat integrating the spatial and temporal components. The ability to control the space and time of the animation seems critical if you expect to extract useful information.

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1690)
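A minimal sketch of the "time bar" Nichols describes, assuming NumPy and Matplotlib are available; the frame stack is random stand-in data and every name below is illustrative rather than anything from the seminar:

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.widgets import Slider

    # Stand-in for a time series of cloud-cover rasters: 12 hourly frames, 100 x 100 cells.
    rng = np.random.default_rng(0)
    frames = rng.random((12, 100, 100))

    fig, ax = plt.subplots()
    fig.subplots_adjust(bottom=0.2)              # leave room for the time bar
    image = ax.imshow(frames[0], cmap="gray")
    ax.set_title("Hour 0")

    # The slider acts as the "time bar": dragging it stops the sequence at any
    # hour so the spatial pattern can be inspected at leisure.
    slider_ax = fig.add_axes([0.2, 0.05, 0.6, 0.04])
    time_slider = Slider(slider_ax, "Hour", 0, len(frames) - 1, valinit=0, valstep=1)

    def show_hour(val):
        hour = int(time_slider.val)
        image.set_data(frames[hour])
        ax.set_title("Hour %d" % hour)
        fig.canvas.draw_idle()

    time_slider.on_changed(show_hour)
    plt.show()

Panning and zooming the map axes while the slider is parked gives the other half of the suggestion, selecting a spatial extent to watch while the time changes.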
------------------------------------------------------------------------

Date: October 06, 1998 07:33 AM
Author: Ronald William Ward (ronward@arches.uga.edu)
Subject: re: the challenge of including time

In reply to Lynn's questions on representing (and visualizing) weather data:

1) What representation of data is necessary for this graphic visualization?

With sequenced images of cloud cover, I think the representation comes from satellite imagery, which is in the form of pixels. These pixels are arranged in a gridded array of cells, so the imagery currently in use is in raster form. The representation stems from some set resolution of pixels.

2) Can better visualizations be prepared to convey this type of information?

I think the data for better visualizations is there. I'm thinking of data collected by weather balloons and weather aircraft, which has a 'z' attribute for height attached to it. The trick is to get away from looking at data collected at different heights in the atmosphere as various weather attributes (temperature, humidity, etc.) attached to their respective z-attributes (height), and to begin looking at height volumetrically, as a third dimension. In the case of the first and second dimensions, we do not treat these dimensions as attributes in and of themselves. Instead, we take distances and assign attributes to the first dimension (for example, roads), or we take areas and assign attribute information within the second dimension (as in land classification of polygons). If we begin to look at z data as the centroid of a volumetric cube - that is, look at z data as a third dimension and not an attribute - then we can begin to assign temperature, pressure, and other sorts of weather attributes to these cubic spaces, and then begin to look at the spatial distributions of weather phenomena in the three-dimensional space in which they occur.

3) How can the representation of the data be enhanced to better support the visualization requirements?

Three-dimensional visualization requires better methods of volumetric representation. We need three-dimensional interpolation algorithms that treat cubes of space the same way raster-based systems treat quadrats in the second dimension. Then we need query functions for inserting attribute information into cubic space.

4) Can you suggest a representation model for dynamic spatial data?

I wish I could. J. Nichols brought up the idea of "smooth transition imagery" from one slice of time to the next. There is a software program (the name of which I cannot remember) that takes two images and, by using a sliding scroll button, gradually meshes one image into the next (we've seen this used as an eye-catching visual effect on commercial television and in film). Such a program might be used, for example, to take 10-minute slices of volumetric weather data and mesh six together to visualize an hour of weather change for a given volume of atmosphere. I wonder, though - would you have to construct separate volumetric sequences for each weather attribute (one for temperature, another for pressure, and so on), or could you use transparency and mixed colors to show several weather attributes moving through the volumetric space together? If we had this capability, could anyone read it? This brings up the question of cognition of dynamic, volumetric data. To begin with, few people would be able to read such a 3-D moving map - but just as many people can now follow the limited visualization capability of satellite sequences, in time people watching 3-D moving weather maps on the evening news would get used to them.

Ron Ward

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1715)
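One way to read the "cubes of space" suggestion above is to grid scattered balloon and aircraft observations onto a regular three-dimensional array of voxels. A rough sketch using inverse-distance weighting (just one of many possible interpolators), with NumPy assumed and every coordinate and value invented for illustration:

    import numpy as np

    def idw_voxel_grid(obs_xyz, obs_values, shape, cell_size, power=2.0):
        """Inverse-distance-weighted interpolation of scattered observations
        (e.g. balloon soundings) onto a regular 3-D voxel grid.
        obs_xyz: (n, 3) observation coordinates in metres; obs_values: (n,).
        shape: (nx, ny, nz) voxel counts; cell_size: voxel edge length in metres."""
        # Centroid coordinates of every voxel.
        axes = [(np.arange(n) + 0.5) * cell_size for n in shape]
        gx, gy, gz = np.meshgrid(*axes, indexing="ij")
        centroids = np.stack([gx.ravel(), gy.ravel(), gz.ravel()], axis=1)

        # Distance from every voxel centroid to every observation, then IDW weights.
        dist = np.linalg.norm(centroids[:, None, :] - obs_xyz[None, :, :], axis=2)
        weights = 1.0 / np.maximum(dist, 1e-9) ** power
        values = (weights @ obs_values) / weights.sum(axis=1)
        return values.reshape(shape)

    # Invented example: 200 temperature soundings in a 50 km x 50 km x 10 km box,
    # gridded into 2 km voxels (25 x 25 x 5 cubes of space).
    rng = np.random.default_rng(1)
    obs_xyz = rng.random((200, 3)) * np.array([50_000.0, 50_000.0, 10_000.0])
    obs_temp = 15.0 - 0.0065 * obs_xyz[:, 2] + rng.normal(0.0, 0.5, 200)   # lapse rate plus noise
    temperature_voxels = idw_voxel_grid(obs_xyz, obs_temp, shape=(25, 25, 5), cell_size=2000.0)
    print(temperature_voxels.shape)    # (25, 25, 5): one temperature per cube of space

Each cell of temperature_voxels is the kind of "cubic space" attribute the post asks for; a query function for such a grid then reduces to indexing the array.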
------------------------------------------------------------------------

Date: October 06, 1998 07:42 AM
Author: Bill Moseley (wmoseley@uga.edu)
Subject: RE: The Challenge of Including Time

The time dimension is different for a number of reasons, including our ability to visualize it. We tend to visualize time through the change of attribute data. Is this really that different from the way we visualize space? Is not space visualized because of the attributes that cover it? While attributes cover space, do they cover time?

Back to reality and Dr. Usery's weather forecast example. I assert that what we are really concerned about in time analysis is change (would time exist if nothing ever changed?). One of the frustrating aspects of an animated weather map is that one sees only one point in time at a time. Unless you have a photographic memory, it is difficult to analyze change because you forget what was previously on the screen. While a bit old-fashioned (in a 2-D fuddy-duddy kind of way), I prefer maps that depict overlaid coverages of the same attribute from several different points in time. This allows one to focus in on change - the essence of time. I look forward to your comments and criticism.

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1716)

------------------------------------------------------------------------

Date: October 06, 1998 07:59 AM
Author: Ronald William Ward (ronward@arches.uga.edu)
Subject: reply to Bill's observations

Bill, I think you might be missing the point. Viewing changes in attributes over time in the second dimension does just that - uses space as a dimension and not an attribute. Consider Lynn's weather example once again. The atmosphere, at least that part of the atmosphere in which weather is played out (the troposphere), is about 10 km thick. Yet when we see weather maps in use, these maps are showing phenomena that take place in a volumetric space transposed onto a two-dimensional space. So then, how can you focus in on "coverages of the same attribute as it changes over time" when the weather attributes being depicted are not being shown where they actually are with respect to one another? Now, if we want to compare, say, surface weather conditions with the direction they are being pushed by upper-level winds (like the jet stream), we use two flat map images, one of surface conditions and another of upper-level conditions, and try to compare the two in a 3-D space which exists only in our minds. The challenge is to represent weather attributes in the space within which they are played out - in a 10 km-thick volumetric space - and then move these spaces over time to show the changes. Weather is not a 2-D phenomenon!

sorry Bill,
Ron Ward

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1717)

------------------------------------------------------------------------

Date: October 06, 1998 08:54 AM
Author: Bill Moseley (wmoseley@uga.edu)
Subject: 3-D clouds

Ouch!! OK Ron, what if I said more or less the same thing with the clouds being 3-D rather than 2-D (good thing I don't teach physical geography). I think what I was trying to focus on was "attribute change" as the best way to visualize time. To quote Heraclitus (540-480 B.C.), "Nothing endures but change." I would add that I do not view space and time as either attributes or dimensions. I think attributes (e.g., clouds) can be modeled in space and time. Space has three dimensions, as does time (past, present, and future). Perhaps the generic concept for space and time is extent: i.e., the spatial and temporal extent of an attribute. Cheerio!

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1720)

------------------------------------------------------------------------

Date: October 06, 1998 08:38 AM
Author: James Nichols (jnichols@extension.umn.edu)
Subject: RE: The Challenge of Including Time

I think the most important point Bill makes is that this visualization method commonly used to depict changing attribute data (in 2-D or 3-D) is very unfriendly to anything more than cursory visual analysis: "you see only one point in time at a time." Since 2-D and especially 3-D depictions can contain an overwhelming amount of visual information, the solution of incorporating another sensory option (hearing, touch) to interpret the temporal component of these animations is very beneficial, if not absolutely necessary.

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1719)

------------------------------------------------------------------------

Date: October 06, 1998 08:10 AM
Author: Wilmot Greene (mot@uga.edu)
Subject: the ears

Here I go again with the sound thing... A major problem with this sort of temporal animation is that the phenomena inside the map area are changing at the same time the reference clock is changing. What this means is that in order to know what time is being represented by the data, you must take your eyes off the data. Well, that is where the ears should come in.
The meteorologist could say "here's an animation from noon Tuesday to noon Thursday," and then, as the animation is shown on the screen, an ascending tone could accompany the picture. The "ascending tone" would simply be a tone that rises exactly one octave during the animation. Human ears are amazingly adept at distinguishing octaves. In Western music, an octave comprises twelve notes (more in some cultures), but in this example the notes could be played in a continuous manner (like a penny whistle rather than a piano). The benefit would be that viewers could see the weather data with their eyes while simultaneously hearing the temporal data with their ears. This may seem confusing, but it would work. Most people would be able to estimate the time to within several hours once they were accustomed to the concept.

There have been many research projects on sound in computer information presentation. The applications in the mapping sciences have just begun to be examined, but it makes sense. Why use only our eyes when we have other (arguably more sensitive) senses? For references, find the journal Human-Computer Interaction or read John Krygier's chapter in "Visualization in Modern Cartography."

I will certainly be relating the use of sound to other topics during this seminar; please be patient with me...

Thanks, MOT

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1718)

------------------------------------------------------------------------

Date: October 07, 1998 06:31 AM
Author: Ronald William Ward (ronward@arches.uga.edu)
Subject: Re: sound

Again, an exciting idea, Wilmot! I think of those satellite and Doppler images on the Weather Channel - if for each one-hour advance (or maybe five minutes in the case of approaching severe weather) a sliding tone were to jump one octave, I think time and movement over space would be easier to interpret. The trick now is to develop sound representations using tones within the range of human hearing - or is it? Do we need mathematical representations of sound before applying them to a weather map? What about the public? Is there a place on the network news for the kinds of tones you're talking about?

Ron Ward

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1755)

------------------------------------------------------------------------

Date: October 07, 1998 06:43 AM
Author: Erik Shepard (shepard@uga.edu)
Subject: Re: Sound

One contention, Ron: if you jump the sound an octave for each hour, you will probably get 4 or 5 hours of time that you can "display." You should probably use smaller intervals - like traversing the major scale through the octave. That way you get seven useful notes per octave instead of one, which gives you much more data. A crazy idea (after my little diatribe about pure versus applied research): how about using major scales to display good weather (the major scale gives most people a "happy" feeling) and minor scales for bad weather (which give most people a "sad" feeling)? Or even variations of these!

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1756)
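A small sketch of the pitch mapping discussed in the last two posts: each frame of an eight-hour animation gets one short tone, stepping up the major scale so the sequence spans exactly one octave (Greene's rising octave, quantized into Shepard's seven-plus-one useful notes). NumPy and the standard-library wave module are assumed; the base frequency, file name, and frame count are illustrative only:

    import wave
    import numpy as np

    SAMPLE_RATE = 22050
    BASE_FREQ = 261.63                           # C4; the track ends one octave higher at C5
    MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11, 12]     # semitone offsets: do re mi fa sol la ti do

    def tone(frequency_hz, seconds=0.5, amplitude=0.3):
        """A plain sine tone as a float array in [-1, 1]."""
        t = np.linspace(0.0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
        return amplitude * np.sin(2.0 * np.pi * frequency_hz * t)

    # One note per hourly frame: eight frames walk up the major scale and land
    # exactly one octave above where they started, so the ear tracks elapsed
    # time while the eyes stay on the map.
    notes = [tone(BASE_FREQ * 2.0 ** (semitone / 12.0)) for semitone in MAJOR_SCALE]
    signal = np.concatenate(notes)

    out = wave.open("time_track.wav", "wb")
    out.setnchannels(1)                          # mono
    out.setsampwidth(2)                          # 16-bit samples
    out.setframerate(SAMPLE_RATE)
    out.writeframes((signal * 32767).astype(np.int16).tobytes())
    out.close()

Swapping MAJOR_SCALE for the natural-minor offsets [0, 2, 3, 5, 7, 8, 10, 12] would give the "bad weather" variant Shepard suggests.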
------------------------------------------------------------------------

Date: October 07, 1998 06:52 AM
Author: Wilmot Greene (mot@uga.edu)
Subject: (edited)

Sure, there are many cultural associations with different combinations of notes. One important thing to consider is musical tension. I accounted for that in my example because the beginning and ending notes would be the same (in different octaves); this gives people a sense of closure. Also, in my example the continuous ascending tone (penny whistle) would make the sound cross-cultural. Neat, huh?

Some basics: the piano (Western music) has 8 octaves of 12 notes each (96 notes); however, research has shown that we must eliminate the extreme ends to facilitate perception by the masses. Also, there are eight abstract sound variables, of which pitch is just one: timbre, duration, loudness, etc. (again I refer you to Krygier's chapter). This discussion has been limited to pitch, which is arguably the simplest of the sound variables. How about changing the timbre of the tone as time passes to indicate weather quality? That way the number of notes wouldn't decrease as it would if a diatonic or even chromatic scale were used (7 and 12 notes per octave, respectively). The range of possible timbres, or note qualities, is virtually infinite.

** Chet Atkins tone for a sunny day, Jimi Hendrix tone for hail storms **

Silly example, but you get the point? In general, sound variables NEED to be incorporated into geographic representations, and the most obvious and simple implementation of this is the sound variable of pitch to represent temporal data.

Mot

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1758)

------------------------------------------------------------------------

Date: October 07, 1998 08:39 AM
Author: Ronald William Ward (ronward@arches.uga.edu)
Subject: sounds from everyday experiences

I like Erik's idea of using more than just whole-octave ranges for sound "symbols." The more tones available, the more information can be attached to them. But what about sounds we hear all the time? Wil, you brought this up when we were talking about your thesis one day - that it's not necessarily tones you would want to use, but some sound indicating the type of terrain the student is moving the mouse cursor over (on a contour map, in order to understand the concept of lines being closer together signifying steep slopes and so on). So your idea was to have a rough sound for rocky terrain, a wet sound for water surfaces... For weather maps the same principles could apply. Sounds for rain, wind, hail, dust storms - all of these sounds could be applied to an interactive weather map to make even the most complicated interactive 3-D animated weather map (whew) easier to read.

Ron Ward

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1764)

------------------------------------------------------------------------

Date: October 07, 1998 10:35 AM
Author: Byong-Woon Jun (bwjun@arches.uga.edu)
Subject: Sonification and ADR for Representing Spatio-Temporal Variation

There are several research needs for cartographic sonification and auditory data representation, as Will has mentioned. The process of including audio information to complement visualization is generally known as sonification, and the representation of data or metadata through non-speech audio as auditory data representation (ADR). I think this approach has some implications for representing and visualizing spatio-temporal variation, including the weather forecast example. Sound exists in time over space, and vision exists in space over time.
Sound can provide a means for presentation or reinforcement of information that is difficult to visualize. Quantification of sound is a natural mode of its perception, and its description is a mature science. Implementation of quantitative psychoacoustic models can aid in the sonification of spatio-temporal data. This sonified visualization can be used for exploration, analysis, and experimentation with spatio-temporal data. Multimedia/hypermedia displays of spatio-temporal data using sonification may be a paradigm shift for cartography.

Hey, Will! For more information, please refer to the following:

C.R. Weber, 1998, "The Representation of Spatio-Temporal Variation in GIS and Cartographic Displays: The Case for Sonification and Auditory Data Representation," in M.J. Egenhofer and R.G. Golledge (eds.), Spatial and Temporal Reasoning in Geographic Information Systems, New York: Oxford University Press.

Good luck with sonified visualization!

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1768)

------------------------------------------------------------------------

Date: October 06, 1998 12:58 PM
Author: Byong-Woon Jun (bwjun@arches.uga.edu)
Subject: Re: The Challenge of including time

I'd like to offer comments on the four questions Dr. Usery addressed.

Q1: The representation of data is based on raster data (grid cells) and is thus location-based. Mainly, the representation uses USGS DEM data as a background and a time series of cloud cover data extracted from satellite imagery as a foreground feature on the display.

Q2: I think several visualization techniques can be prepared to convey this type of information. First, interactive animation, such as a slide bar, would help us easily see change over time and space, as James indicated above. In this case, the animation technique can be used as an exploratory tool to detect similarities or differences in distribution within a series of cloud cover maps. This is especially possible when one can interactively access the individual frames in an animation and quickly switch between individual images or image sequences. Second, we can put numeric time characters or a clock on the imagery so that we can easily detect the time change. Third, we need to extend the current 2D or 2.5D visualization to full 4D visualization for better cognition. For example, volumetric or navigational data visualization, including a full 3D terrain surface and cloud thickness, would provide us with useful information about weather. Finally, sonification and auditory data representation to complement the visualization might be an alternative, as Will indicated.

Q3: In order to better support the visualization requirements, we need to extend current 2D or 2.5D, static data representations and data models to volumetric (3D) and dynamic representations and data models (full 4D data representations).

Q4: It seems that most dynamic representations of spatial data are based on Sinton's proposition that of three dimensions - space, theme, and time - one is fixed, a second is controlled, and the third is measured. This view is supported in GIS because most data sources are maps, and maps usually fix the time, control the theme, and vary the spatial location.
Dr. Peuquet presented a triad conceptual framework for the representation of temporal dynamics in GIS that unifies temporal- as well as locational- and object-related aspects, and that incorporates concepts from perceptual psychology, artificial intelligence, and other fields. Dr. Usery also proposed a conceptual model (called feature-based GIS) for structuring features in a GIS which includes spatial, thematic, and temporal dimensions and structures attributes and relationships for each dimension, and which is built on concepts from region theory in geography, category theory in cognitive psychology, and data modeling theories, including abstraction and generalization concepts in cartography and GIS. He argued that the rich feature construct has potential application for spatial analysis and sophisticated geographical process models.

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1734)

------------------------------------------------------------------------

Date: October 06, 1998 02:56 PM
Author: Erik Shepard (shepard@uga.edu)
Subject: Re: The challenge of including time

With regard to this question of including time, I think that some very good points have been made. Ron's point about 3D data versus 2D is valid, but it does in some ways complicate the situation beyond its current complications. The data representation currently used, as has been pointed out, is raster-based and derived from satellite imagery.

One possibility which would be interesting to explore is a concept we have examined a little bit: making attribute data a mathematical model that is a function of time. The advantage of this, of course, is that it alleviates the problem of discrete "snapshots," since the model itself is continuously growing and evolving. The difficulty here, of course, is that of deriving the model. Global climate models have come a long way, but they are still simplistic at best. Still and all, it can be done.

As far as visualizing this data (and back again to Ron's 3D points), one technology that I think has a lot to offer is the whole "VR" paradigm. Of course, to really look at this data, we would have to extend the VR paradigm to allow the user to vary not only spatial position but also temporal position (vis-à-vis the comment about a time "slider"). The VR concept, I think, is a good one in that it lets the user decide the perspective (planar versus volumetric, inside or outside of, above or below, etc.) and view the data as best benefits them. By further allowing the user to vary the time, they could examine the changes as they occurred and concentrate on the area of most interest to them.

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1738)

------------------------------------------------------------------------

Date: October 07, 1998 06:07 AM
Author: Erik Shepard (shepard@uga.edu)
Subject: Some further thoughts regarding time

Yesterday, when I posted about using mathematical models (which I think is a good idea), I lamented the fact that we really don't have good enough GCMs to predict everything (at least not without several Cray supercomputers). After thinking about it, though, we could use statistical models or even spline functions to post-fit the data we already have. We have so much data that it seems we could condition either of these fairly well, and this would be usable for continuous modeling and even short-term prediction. Just a thought.

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1752)
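A toy version of the post-fitting idea: treat the attribute at one grid cell as a continuous function of time by least-squares fitting a low-order polynomial to its recent record (a stand-in for the statistical or spline models mentioned; the data below are fabricated for illustration and NumPy is assumed):

    import numpy as np

    # Fabricated record: hourly temperature at one grid cell over the past 24 hours.
    hours = np.arange(24.0)
    rng = np.random.default_rng(2)
    observed = 12.0 + 6.0 * np.sin(2.0 * np.pi * (hours - 9.0) / 24.0) + rng.normal(0.0, 0.4, 24)

    # Post-fit a low-order polynomial to the record. The fit turns the discrete
    # snapshots into a continuous function of time that can be evaluated anywhere.
    coefficients = np.polyfit(hours, observed, deg=3)
    model = np.poly1d(coefficients)

    print("Value between snapshots, t = 10.5 h:", round(float(model(10.5)), 2))
    print("Two hours beyond the record, t = 26 h:", round(float(model(26.0)), 2))

Because the fitted model is continuous, it supports both interpolation between snapshots and cautious short-term extrapolation; a smoothing spline would play the same role with more local flexibility.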
------------------------------------------------------------------------

Date: October 06, 1998 02:58 PM
Author: Deana Pennington (penningtond@geo.orst.edu)
Subject: At the risk of sounding simplistic, I think I'll dive in here...

My background is in programming, but I have no hands-on experience with GIS. Yet some of the problems faced are not unlike those faced when designing other software applications. It seems to me that there are two separate arenas that need to be discussed: representation for a non-technical public, versus representation for technical analysis by specialists in a given field who are also very familiar with GIS. Solutions in these two arenas are likely to be quite different. For the non-tech types, simplicity is always a fundamental requirement. Conversely, techies want as much detail as possible, with the ability to pick and choose what they show on the display. Obviously, in this seminar, our bias is toward the latter! Yet the question posed by Lynn is about the first group (weather-forecast watchers as a whole), so that is the group I will address.

In the given example, a non-technical public, confused by a series of satellite snapshots, is not going to have the matter cleared up by adding depth perception, bells, and whistles! They want several basic questions answered: 1) How intense is the storm? 2) Is it coming my way? 3) When will it get here? and 4) How bad will it be when it gets here? In the past, meteorologists simply answered those questions. Now, they "show and tell." These displays are much more effective if the meteorologist adds a few simple explanations: "The dark spots are areas of intense storm activity, so you can see in this first shot, taken three hours ago, that the storm front was over Dallas, trending this direction. In the next picture, taken a few minutes ago, you'll see the storm front has moved west and is now...."

The question is, can we add simple graphics to the display so that the "tell" part is unnecessary (except for adding a personal touch : ) ? We have to do three things: focus their attention on the primary aspect(s) they should be watching on the image, interpret that aspect for them, and give them the ability to connect that aspect between the two images, so they can perceive what change has taken place. Of course, it would then be really nice to be able to project the image into the future and show them one or more "predicted" images, something they will be doing mentally anyway, if you've done your job well.

To focus attention, it's hard to beat shading, which some of the displays already use. However, in many cases, graphic overlays using the standard meteorological symbols that everyone is familiar with would help. Not only do those symbols help focus attention, they interpret, and they also predict. Everyone knows, if they see a storm front line with arrows pointing east, what to anticipate in the next shot. This anticipation is important in being able to mentally connect the two images. When the second image appears, we already know what to expect, and those same symbols instantly make the connection.

Even if "smooth transition imagery" were used to enable "connection" between the images (an idea I like), the main point of the images would still need to be interpreted for the public, who are not trained to look at satellite images. It is important to give an indication of how much time has elapsed. I'm not sure how effective sound would be for this specific purpose.... Remember, you have lots of aging people who don't hear well, and those are precisely the people who are most likely to regularly watch a weather forecast. Not to mention those of us who have noisy kids in the background! (Lesson number 1 in programming: always remember who your users are!) Personally, for this particular example, I think you'd be far better off sticking with a visual display, although in general I agree that we need to explore the use of other sensory perceptions to enhance visualizations.

It is notable that programming for "smooth transition imagery" would be a first step toward programming those "predicted" images, which would be so nice to have! After a series of migrations connecting point A in image 1 with point A in image 2, etc., the pattern of change over several steps could be extrapolated and used to construct a "future" image.

Obviously, if we are talking about the needs of the meteorologists analyzing all this data, then the answer is quite different. Depth perception and the ability to plot data volumetrically become imperative, not to mention the ability to selectively filter out or add data to any display, which requires a good 3-D query tool.

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1739)
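A very rough sketch of constructing a "future" image from the change between two frames, under the strong simplifying assumption that the whole pattern moves as a rigid translation: estimate the pixel shift between frames with an FFT cross-correlation and reapply it one step forward. NumPy is assumed, the frames are toy data, and real cloud fields would of course deform as well as translate:

    import numpy as np

    def estimate_shift(frame1, frame2):
        """Integer (row, col) displacement d such that frame2 is roughly frame1
        moved by d, taken from the peak of an FFT cross-correlation."""
        corr = np.fft.ifft2(np.conj(np.fft.fft2(frame1)) * np.fft.fft2(frame2)).real
        d = np.array(np.unravel_index(np.argmax(corr), corr.shape))
        size = np.array(frame1.shape)
        d[d > size // 2] -= size[d > size // 2]      # convert wrapped offsets to signed ones
        return d

    def predict_next(frame1, frame2):
        """Extrapolate one step ahead by reapplying the frame1 -> frame2 motion."""
        d = estimate_shift(frame1, frame2)
        return np.roll(frame2, shift=tuple(d), axis=(0, 1))

    # Toy example: a bright "storm" block drifting 3 rows south and 5 columns east per frame.
    field = np.zeros((64, 64))
    field[10:20, 10:20] = 1.0
    frame1 = field
    frame2 = np.roll(field, (3, 5), axis=(0, 1))
    frame3_forecast = predict_next(frame1, frame2)   # the block advances another 3 rows and 5 columns

Tracking many local patches instead of the whole frame, and smoothing the resulting displacement field, would be the "point A to point A" version of the same idea.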
Even if "smooth transition imagery" were used to enable "connection" between the images (an idea I like), the main point of the images would still need to be interpreted for the public, who are not trained to look at satellite images. It is important to give an indication of how much time has elapsed. I'm not sure how effective sound would be for this specific purpose....remember, you have lots of aging people who don't hear well, and those are precisely the people who are most likely to regularly watch a weather forecast. Not to mention those of us who have noisy kids in the background! (Lesson number 1 in programming...always remember who your users are!) Personally, for this particular example, I think you'd be far better off sticking with a visual display, although in general I agree that we need to explore the usage of other sensory perceptions to enhance visualizations. It is notable that programming for "smooth transition imagery" would be a first step towards programming those "predicted" images, which would be so nice to have! After a series of migrations connecting point A in image 1 with point A in image 2, etc, the pattern of change over several steps could be extrapolated, and used to construct a "future" image. Obviously, if we are talking about the needs of the meterologists analyzing all this data, then the answer is quite different. Depth perception, and the ability to plot data volumetrically becomes an imperative, not to mention the ability to selectively filter out or add data to any display, requiring a good 3-D query tool. (http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1739) ----------------------------------------------------------------------- [Top][Previous][Next][Print][Reply][Edit*][Move*][Delete*] Date: October 07, 1998 06:14 AM Author: Erik Shepard (shepard@uga.edu) Subject: Comments on benefactors Deanna makes a good point. I think that often we get bogged down in the research angle (which is admittedly fun) and forget the reason that we are doing this work in the first place. Pure research is good; it allows us to gain perspective on what will work and what won't - and why. But pure research shouldn't exist in a vacuum. Ultimately, somebody somewhere is going to have to use the stuff we come up with, and we have to make sure that what we do now facilitates that later. I'm not saying abandon research. I'm saying temper it with visions toward eventual application. Given that this seminar is sponsored by the UCGIS, it is obviously going to have a research bent to it. But maybe we do need to reexamine the initiatives to se how they will eventually be applied. (http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1753) ----------------------------------------------------------------------- [Top][Previous][Next][Print][Reply][Edit*][Move*][Delete*] Date: October 07, 1998 06:19 AM Author: Ronald William Ward (ronward@arches.uga.edu) Subject: Comments on D. Pennington's observations The main theme of D. Pennington's comments about the weather example seems to be the differing needs of technical users (meteorologists) and the public at large. While I would agree that 'techies' already use representations and visualizations which are beyond the understanding of the interested public, I see no reason why weather representation and visualization advancements should continue to be applied for the benefit of meteorologists on the one hand, and 'watered down' versions of these advancements should be presented in public forums on the other. 
Truly, when the public watches a three-minute weather report on the network news, that's about how much time it wants to spend getting that information. I think the misconception is arising that 3-D weather map animation will be more difficult for the public to interpret. Three points here:

1) Representations allowing for 3-D depiction of weather events could allow a more realistic visualization of what actually occurs - and this could allow for better predictive capabilities. With the help of a trained interpreter (the weather person), this better predictive effort can be passed on to the public, which, in time, would adapt to reading the 3-D weather map.

2) Contrary to the idea that the public might be confused by more representative weather maps, volumetric weather visualizations might be considerably less confusing to read. In addition, much more information could be included in the three-minute forecast. Let me expand on this point by giving a practical example - plate tectonics. Plate tectonics presents much the same problem in representation and visualization as the weather does. The difference between the two is that weather occurs in a volumetric space above the earth's surface, while plate tectonics is played out in a volumetric space below and on the earth's surface. I remember when I was introduced to the science of plate tectonics in high school (sometime during the late Pleistocene) - well, it was a theory at the time. I struggled with trying to understand what goes on with plates and different types of crust and how it related to convection currents in the mantle - and part of the reason for my struggle was that we tried to understand the processes by looking at 2-D cross-sections and world maps of plates. I finally figured it all out... and years later I saw my first 3-D representation of the processes (animated block diagrams of different types of plate boundaries). Nowadays kids in grade school see these moving block diagrams, and their learning curves are that much steeper than mine was at the same age because of it. This is the sort of explanatory power an animated weather map could bring to the public. Part of the problem with believing that some of these things are too technical for the viewing public is a failure to give credit where credit is due. In the weather and climate labs I teach, students come in having never really attempted to interpret a weather map or make a forecast on their own. Once the tools for doing so are presented to them, most students go through a transitional "ah-ha" stage in their learning where they begin to see the spatial relationships, then go on to interpret weather maps without much help at all. I do not see that teaching students to read 3-D moving maps will be much different from teaching them to read 2-D maps as I do now. The public, given the opportunity, could learn to interpret and predict the weather by using 3-D animations - as long as the tools for doing so are not kept from them.

3) The use of sound is an important step in these matters. Once people get used to the idea of using tones and such as "audio points of reference," such representation capability could make reading 3-D moving maps even less confusing. It's important to keep in mind that some segments of society may not have the ability to benefit from sound to the extent that many others will, and different types of representation and visualization tools should be developed to overcome such problems.
However, the fact that a small segment of our population is blind does not mean we abandon the idea of sight as a tool for relaying information! It simply means that, along with the development of visual tools, we develop Braille.

Ron Ward

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1754)

------------------------------------------------------------------------

Date: October 07, 1998 07:36 AM
Author: Erik Shepard (shepard@uga.edu)
Subject: Comments on comments

I still contend, Ron, that in order for 3D data to really be usable, the user needs to have some way to control perspective (à la VR). Without that capability, given the volumes of data we are talking about (not just one theme like plates, but multiple themes like temperature, humidity, etc.), 3D is just going to be confusing. The local weather station here has attempted to incorporate 3D into their weather program (a laudable attempt). Basically, what they do is swing around so that you can see cloud elevations as well as spatial coverage. But since my TV is flat, if they show me elevation, I cannot see spatial distribution, and vice versa. In point of fact, their 3D display always just confuses the heck out of me. I think that some examination of 3D is in order, but simply throwing it into the mix and saying "they'll learn" is not the right approach, in my opinion.

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1759)

------------------------------------------------------------------------

Date: October 07, 1998 08:10 AM
Author: Ronald William Ward (ronward@arches.uga.edu)
Subject: Re: comments on comments

I see what you are saying here, Erik. The kind of perspective controls you are talking about are currently available (albeit inefficiently) in software programs like Surfer, where you can rotate the block diagram image to the desired perspective. So what you are talking about is an interactive form of the Weather Channel (as in the www.weather.com website) wherein viewers can choose the image they want to look at, then manipulate it in any way they deem necessary to understand it. This is a great idea. You are paraphrasing me when you write "throwing it into the mix" - this is not what I advocate. As is the case with the teaching scenarios I offered, the public can be educated by the techies in how to interpret 3-D moving weather maps. Then, at some point down the line, weather people will not have to dwell on explanations of how to read the animation and can move on to explaining the content of the animation.

Ron Ward

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1760)

------------------------------------------------------------------------

Date: October 07, 1998 10:54 AM
Author: Wilmot Greene (mot@uga.edu)
Subject: we'll figure it out

My attempt at some synthesis on this matter... Just as the "techie" world gets more complex, so does the "average Joe" world. What percentage of the general public could read a two-dimensional map 100 years ago? Technology is created, made to mimic current methods, and then slowly proceeds. We should not keep technology away from the public because it MAY be confusing, as if we were overprotective parents of children with no curiosity. PhDs and high school dropouts watch the same channels.
MOT

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1769)

------------------------------------------------------------------------

Date: October 07, 1998 11:54 AM
Author: Deana Pennington (penningtond@geo.orst.edu)
Subject: Reply

Actually, my main point was not that we should save the public from high-tech presentations. When we have 3-D, by all means let's use it. My point was that simply adding 3-D is not going to solve the problem of their not knowing how to interpret what they see. And that was the original question posed by Lynn. As someone pointed out, the general public gets more and more sophisticated, and things that have to be tediously explained today will be common knowledge in the future. But it doesn't happen by making a giant leap and expecting them to go blindly after you! Actually, most non-techies have to be dragged slowly, against their will, in that direction. Guess I'll save that discussion for the "GIS and Society" forum! To sum up: for the specific example of meteorological presentations, 3-D would be great; dynamic presentations are helpful; OK, even throw in tonal changes if you have to; but for the time being, all this has to be interpreted for the audience.

Deana

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1774)

------------------------------------------------------------------------

Date: October 07, 1998 12:12 PM
Author: Byong-Woon Jun (bwjun@arches.uga.edu)
Subject: Comments on representation of inexact spatial concepts

I'd like to make comments on the representation of inexact spatial concepts (e.g., near, far, above) in the white paper. It seems true that as the use of GISs for policy analysis and decision making rapidly increases, there is an urgent issue of how to represent data of varying exactness and varying degrees of reliability and then convey this additional information to the user. In other words, we need to propagate the errors to the user. The inexactness in a spatial database can be measured in the spatial, thematic, and temporal dimensions if we extend 2D or 3D data representation to 4D data representation. However, I think the white paper focuses on inexactness representation in only the spatial and thematic dimensions. It's about time to extend the representation of inexact geographic concepts to all three dimensions. For example, time scale is also important for analysis and modeling when we extend the modifiable areal unit problem (MAUP) to the time dimension. I believe fuzzy representation can handle such fuzziness and imprecision, inherent in geographic observational data, in all three dimensions. Any comments?

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1776)
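A minimal illustration of the fuzzy representation Jun suggests, with membership functions for "near" (spatial) and "soon" (temporal); the breakpoints are arbitrary choices made for the example, not values from any standard, and NumPy is assumed:

    import numpy as np

    def fuzzy_near(distance_km, inside_km=1.0, outside_km=10.0):
        """Membership in the fuzzy set "near": 1 within inside_km, falling
        linearly to 0 at outside_km, instead of a crisp near/far cutoff."""
        return float(np.clip((outside_km - distance_km) / (outside_km - inside_km), 0.0, 1.0))

    def fuzzy_soon(hours_ahead, inside_h=1.0, outside_h=12.0):
        """The same idea applied to the temporal dimension ("soon")."""
        return float(np.clip((outside_h - hours_ahead) / (outside_h - inside_h), 0.0, 1.0))

    # A storm front 4 km away that should arrive in 3 hours:
    print("near:", round(fuzzy_near(4.0), 2))        # 0.67
    print("soon:", round(fuzzy_soon(3.0), 2))        # 0.82
    # A fuzzy AND (minimum operator) combines the spatial and temporal memberships.
    print("near AND soon:", round(min(fuzzy_near(4.0), fuzzy_soon(3.0)), 2))

A thematic membership function (say, "heavy rain") could be added in exactly the same way, giving graded degrees of truth across all three dimensions rather than crisp cutoffs.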
------------------------------------------------------------------------

Date: October 07, 1998 02:02 PM
Author: James Nichols (jnichols@extension.umn.edu)
Subject: Inexact Representations Extensions

I'd agree that we need to consider and define useful representation methods for inexactness in ALL spatial, temporal, and thematic components in developing an improved representational framework. Ignoring any of these leaves us with an unbalanced model. We need to explore, or more clearly define, the range of representation extension needs and options. Time representation is a component that seems especially difficult for us to conceptualize, but I don't think it is analytically all that different from spatial or thematic representations of inexactness. Similar "fuzzy" concepts apply to all of these components. A position may be near or far, a point in time may be soon or distant, a theme value may be high or low, and so on.

(http://forums.library.orst.edu/forums/Index.cfm?CFApp=7&Message_ID=1779)