Elasticity in the cloud
By David Chiu, March 2010
By Ryan K. L. Ko, March 2010
Computers continue to get faster exponentially, but the computational demands of science are growing even faster. Extreme requirements arise in at least three areas.
By David P. Anderson, March 2010
Despite its promise, cloud computing innovation has been driven almost exclusively by a few industry leaders, such as Google, Amazon, Yahoo!, Microsoft, and IBM. The involvement of the wider research community, in both academia and industrial labs, has so far been patchy and lacking a clear agenda. In our opinion, the limited participation stems from the prevalent view that clouds are mostly an engineering and business-oriented phenomenon based on stitching together existing technologies and tools.
By Ymir Vigfusson, Gregory Chockler, March 2010
In recent years, empirical science has been evolving from physical experimentation to computation-based research. In astronomy, researchers seldom spend time at a telescope; instead they access the many image databases that are created and curated by the community [42]. In bioinformatics, data repositories hosted by entities such as the National Institutes of Health [29] provide the data gathered by Genome-Wide Association Studies and enable researchers to link particular genotypes to a variety of diseases.
By Gideon Juve, Ewa Deelman, March 2010
By Sumit Narayan, Chris Heiden, March 2010
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. With this pay-as-you-go model of computing, cloud solutions are seen as having the potential both to dramatically reduce costs and to accelerate application development.
By Ramaswamy Chandramouli, Peter Mell, March 2010
At the turn of the 20th century, companies stopped generating their own power and plugged into the electricity grid. In his now famous book The Big Switch, Nick Carr analogizes those events of a hundred years ago to the tectonic shift taking place in the technology industry today.
By Guy Rosen, March 2010
By Daniel W. Goldberg, December 2009
The social Web is a set of ties that enable people to socialize online, a phenomenon that has existed since the early days of the Internet in environments like IRC, MUDs, and Usenet (e.g. 4, 12). People used these media in much the same way they do now: to communicate with existing friends and to meet new ones. The fundamental difference was the scale, scope, and diversity of participation.
By Sarita Yardi, December 2009
"Know thyself". Carved in stone in front of the Temple of Apollo at Delphi, that was the first thing people saw when they visited the Oracle to find answers. The benefits of knowing oneself are many. It fosters insight, increases self-control, and promotes positive behaviors such as exercise and energy conservation.
By Ian Li, Anind Dey, Jodi Forlizzi, December 2009
Research related to online social networks has addressed a number of important problems related to the storage, retrieval, and management of social network data. However, privacy concerns stemming from the use of social networks, or the dissemination of social network data, have largely been ignored. And with more than 250 million active Facebook (http://facebook.com) users, nearly half of whom log in at least once per day [5], these concerns can't remain unaddressed for long.
By Grigorios Loukides, Aris Gkoulalas-Divanis, December 2009
Searching for information online has become an integral part of our everyday lives. However, sometimes we don't know the specific search terms to use, while other times, the specific information we're seeking hasn't been recorded online yet.
By Gary Hsieh, December 2009
As a computer science student, you stand a significant chance of working in software development after graduation. Whether your career path takes you into industry or academia, you're likely to have some kind of interaction with software development companies or organizations, if only in trying to get the most out of a project or collaboration.
By Michael DiBernardo, September 2009
While touchscreens allow extensive programmability and have become ubiquitous in today's gadgetry, such configurations lack the tactile sensations and feedback that physical buttons provide. As a result, these devices require more attention to use than their button-enabled counterparts. Still, the displays provide the ultimate interface flexibility and thus afford a much larger design space to application developers.
By Chris Harrison, Scott Hudson, September 2009
Virtual machine technology, or virtualization, is gaining momentum in the information technology community. While virtual machines are not a new concept, recent advances in hardware and software technology have brought virtualization to the forefront of IT management. Stability, cost savings, and manageability are among the reasons for the recent rise of virtualization. Virtual machine solutions can be classified by hardware, software, and operating system/containers. From its inception on the mainframe to distributed servers on x86, the virtual machine has matured and will play an increasing role in systems management.
By Jeff Daniels, September 2009
By Justin Solomon, June 2009
By Anna Ritchie, June 2009
By Sumit Narayan, June 2009
Computers play an integral part in designing, modelling, optimising and managing business processes within and across companies. While Business Process Management (BPM), Workflow Management (WfM) and Business Process Reengineering (BPR) have been IT-related disciplines with a history of about three decades, there is still a lack of publications clarifying the definitions and scope of basic BPM terminology, such as business process, BPM versus WfM, workflow, and BPR. Such a myriad of similar-sounding terms can be overwhelming for computer scientists and computer science students who may wish to venture into this area of research. This guide aims to address this gap by providing a high-level overview of the key concepts, rationale, features, and developments of BPM.
By Ryan K. L. Ko, June 2009
Courier problems are a set of recently proposed combinatorial optimization problems inspired by novel requirements in railway wagon scheduling. These problems consider the scheduling strategies of mobile couriers that initially reside at some fixed location and are assigned duties of transferring commodities between different pairs of locations. The scenario may be static or dynamic. The goal is to optimize the movement of the couriers subject to constraints on the traversed path or the associated cost. We discuss several varieties of courier problems formalized on graphs and address potential methods for their solution.
By Malay Bhattacharyya, June 2009
By Steve Clough, March 2009
By Daniel W. Goldberg, March 2009
This article describes a technique to visualize query results, representing purchase orders placed on Amazon.com, along a traditional 2-D scatter plot and a space-filling spiral. We integrate 3-D objects that vary their spatial placement, color, and texture properties into a visualization algorithm. This algorithm represents important aspects of a purchase order based on experimental results from human vision, computer graphics, and psychology. The resulting visual abstractions are used by viewers to rapidly and effectively explore and analyze the underlying purchase orders data.
By Amit Prakash Sawant, Christopher G. Healey, Dongfeng Chen, Rada Chirkova, March 2009
By Caio Camargo, March 2009
By William Ella, December 2008
By Joonghoon Lee, December 2008
By Cara Cocking, December 2008
By David Chiu, December 2008
By Salik Syed, September 2008
The visual appearance of volumes of water particles, such as clouds, waterfalls, and fog, depends both on microscopic interactions between light rays and individual droplets of water and on macroscopic interactions between multiple droplets and paths of light rays. This paper presents a model that builds upon a typical single-scattering volume renderer to correctly account for these effects. To accurately simulate the visual appearance of a surface or a volume of particles in a computer-generated image, the properties of the material or particle must be specified using a Bidirectional Reflectance Distribution Function (BRDF), which describes how light reflects off of a material, and a Bidirectional Transmittance Distribution Function (BTDF), which describes how light refracts into a material. This paper describes an optimized BRDF and BTDF for volumes of water droplets, which takes their geometry into account in order to produce well-known effects, such as rainbows and halos. It also describes how a multiple-scattering path tracing volume integrator can be used to more accurately simulate macroscopic light transport through a volume of water, creating a more "cloudlike" appearance than a single-scattering volume integrator. This paper focuses on replicating the visual appearance of volumes of water particles, and although it makes use of physical models, the techniques presented are not intended to be physically accurate.
By James Hegarty, September 2008
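For context, the kind of single-scattering model the paper above extends can be summarized by the standard volume rendering integral below. The notation is generic and not drawn from the paper itself; in this setting, the droplet BRDF/BTDF effectively plays the role of the phase function p.

```latex
% Standard single-scattering volume rendering along a ray x(t), 0 <= t <= d
% (generic formulation, not the paper's notation). T is transmittance,
% sigma_s and sigma_t are scattering and extinction coefficients, and
% L_light is the (possibly attenuated) light arriving from the source.
\[
L(\mathbf{x},\omega) \;=\; T(0,d)\,L_{\mathrm{bg}}
  \;+\; \int_{0}^{d} T(0,t)\,\sigma_s\!\big(\mathbf{x}(t)\big)\,
        p(\omega,\omega_L)\,L_{\mathrm{light}}\!\big(\mathbf{x}(t)\big)\,\mathrm{d}t,
\qquad
T(a,b) \;=\; \exp\!\Big(-\!\int_{a}^{b}\sigma_t\big(\mathbf{x}(s)\big)\,\mathrm{d}s\Big)
\]
```

Multiple scattering replaces the single in-scattering term with a recursive, path-traced estimate of light arriving from all directions, which is what produces the softer, more "cloudlike" look the abstract describes.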
By Ed DeHart, September 2008
By Craig Pfeifer, June 2008
By Shahriar Manzoor, June 2008
By Cara Cocking, March 2008
By Leslie Sandoval, March 2008
By Sergio Sayago, Patricia Santos, Maite Gonzalez, Míriam Arenas, Laura López, December 2007
By Rachel Gollub, December 2007
This project visualizes a scientific dataset containing two-dimensional flow data from a simulated supernova collapse provided by astrophysics researchers. We started the project by designing visualizations as hand drawings representing the flow data, without taking into consideration the implementation constraints of our designs. We then implemented a few of these hand-drawn designs. We used an assortment of simple geometric graphical objects, called glyphs, such as dots, lines, arrows, and triangles, to represent the flow at each sample point. We also incorporated transparency in our visualizations. We identified two important goals for our project: (1) design different types of graphical glyphs to support flexibility in their placement and in their ability to represent multidimensional data elements, and (2) build an effective visualization technique that uses glyphs to represent the two-dimensional flow field.
By Amit Prakash Sawant, Christopher G. Healey, December 2007
Fans of PC role-playing games need no introduction to Bioware, the Edmonton, Alberta-based developer of Baldur's Gate, Neverwinter Nights, and Jade Empire, among others. The company recently opened a studio in Austin, Texas, to develop a massively multiplayer online role-playing game (MMORPG, or simply MMO) for an unannounced intellectual property. Ben Earhart, client technology lead on the new project, took a few hours out of his busy schedule to discuss with Crossroads the future of real-time rendering: 3-D graphics that render fast enough to respond to user input, such as those required for video games.
By James Stewart, December 2007
By Gregory M. Zaverucha, December 2007
By Daniel Alex Finkelstein, December 2007
The physiology of how the human brain recalls memories is not well understood. Neural networks have been used in an attempt to model this process.
Two types of networks, auto- and hetero-associative, have been used in several models of temporal sequence memory for simple sequences of both randomly generated and structured patterns. Previous work has shown that a model with coupled auto- and hetero-associative continuous attractor networks can robustly recall learned simple sequences. In this paper, we compare Hebbian learning and pseudo-inverse learning in a model for recalling temporal sequences in terms of their storage capacities. The pseudo-inverse learning method is shown to have a much higher storage capacity, making the new network model 700% more efficient by reducing the number of calculations required.
By Kate Patterson, December 2007
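For readers unfamiliar with the two rules being compared, their standard forms are shown below. The notation is generic (N units, P stored patterns collected as columns of an N-by-P matrix) and is not taken from the paper.

```latex
% Patterns \xi^{1},\dots,\xi^{P} \in \{-1,+1\}^{N}, stacked as columns of \Xi.
% Hebbian (outer-product) rule:
\[
W^{\mathrm{Hebb}} \;=\; \frac{1}{N}\sum_{\mu=1}^{P}\xi^{\mu}\,(\xi^{\mu})^{\mathsf T}
                  \;=\; \tfrac{1}{N}\,\Xi\,\Xi^{\mathsf T}
\]
% Pseudo-inverse (projection) rule, assuming linearly independent patterns:
\[
W^{\mathrm{PI}} \;=\; \Xi\,\Xi^{+}
               \;=\; \Xi\,\big(\Xi^{\mathsf T}\Xi\big)^{-1}\Xi^{\mathsf T}
\]
```

In the classical Hopfield setting, the Hebbian rule tolerates only on the order of 0.14N random patterns before recall degrades, while the projection (pseudo-inverse) rule can store up to N possibly correlated patterns, which is consistent with the capacity advantage reported above; for sequence recall, the same rules are used to map each pattern onto its successor.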
By Saman Amirpour Amraii, December 2007
By Justin Solomon, September 2007
By Amit Chourasia, March 2007
The advent of computers with high processing power has led to the generation of large, multidimensional collections of data. Visualization lends itself well to the challenge of exploring and analyzing these information spaces by harnessing the strengths of the human visual system. Most visualization techniques are based on the assumption that the display device has sufficient resolution, and that our visual acuity is adequate for completing the analysis tasks. However, this may not be true, particularly for specialized display devices (e.g., PDAs or large-format projection walls).
In this article, we propose to: (1) determine the amount of information a particular display environment can encode; (2) design visualizations that maximize the information they represent relative to this upper-limit; and (3) dynamically update a visualization when the display environment changes to continue to maintain high levels of information content. To our knowledge, there are no visualization systems that do this type of information addition/removal based on perceptual guidelines. However, there are systems that attempt to increase or decrease the amount of information based on some level-of-detail or zooming rules. For example, semantic zooming tags objects with "details" and adds or removes them as the user zooms in and out. Furnas's original fisheye lens system [9] used semantic details to determine how much zoom was necessary to include certain details. Thus, while zooming for detail, you see not only a more detailed graphic representation, but also more text details (e.g., more street names on the zoomed-in portion of a map). Level-of-detail hierarchies have also been used in computer graphics to reduce geometric complexity where full resolution models are unnecessary and can be replaced with low-detail models where the resulting error cannot be easily recognized. Our approach is motivated by all these ideas, but our key contribution is that we use human perception constraints to define when to add or remove information.
By Amit Prakash Sawant, Christopher G. Healey, March 2007
This paper presents the core knowledge required to properly develop 2D games in Java. We describe the common pitfalls that can easily degrade graphics performance and show how we achieved impressive frames-per-second display updates when implementing Minueto, a game development framework.
By Alexandre Denault, Jörg Kienzle, March 2007
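One of the classic pitfalls this paper alludes to is drawing sprites that are not in the screen's pixel format, which forces Java 2D into a software conversion on every frame. The sketch below shows the usual remedy; it is a generic illustration (class and path names are invented), not code from Minueto.

```java
import java.awt.*;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.io.File;
import java.io.IOException;

public class SpriteLoader {
    // Convert a loaded image to the screen's pixel format once, so Java 2D
    // can cache it in accelerated memory instead of converting it per draw.
    public static BufferedImage loadCompatible(String path) throws IOException {
        BufferedImage raw = ImageIO.read(new File(path));
        GraphicsConfiguration gc = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice()
                .getDefaultConfiguration();
        BufferedImage sprite = gc.createCompatibleImage(
                raw.getWidth(), raw.getHeight(), Transparency.BITMASK);
        Graphics2D g = sprite.createGraphics();
        g.drawImage(raw, 0, 0, null);   // one-time format conversion
        g.dispose();
        return sprite;
    }
}
```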
By Deian Stefan, March 2007
By Paula Bach, Chris Jordon, December 2006
By James Stewart, December 2006
By Chris Dondanville, December 2006
By Caio Camargo, December 2006
By Damien Marshall, Tomas Ward, Séamus McLoone, December 2006
By Daniel Lewis, September 2006
By Mike Verdicchio, September 2006
By Alexander Bick, September 2006
By Bryan Stroube, May 2006
By Umer Farooq, May 2006
By Anh Nguyen, Tadashi Nakano, Tatsuya Suda, August 2005
By Yanru Zhang, Michael Weiss, September 2003
By Parveen Patel, December 2002
This paper presents an introduction to computer-aided theorem proving and a new approach using parallel processing to increase power and speed of this computation. Automated theorem provers, along with human interpretation, have been shown to be powerful tools in verifying and validating computer software. Destiny is a new tool that provides even greater and more powerful analysis enabling greater ties between software programs and their specifications.
By Josiah Dykstra, April 2002
By Robert Korfhage, July 2000
By Eric Scheirer, March 2000
By Kevin Fu, March 2000
By Matt Tucker, March 2000
By Jeremy Kindy, John Shuping, Patricia Yali Underhill, David John, March 2000
By Subhasis Saha, March 2000
Note from ACM Crossroads: Due to errors in the layout process for printing on paper, the version of this article in the printed magazine contained several errors (mostly related to superscripts). This HTML version is the accurate version. Please refer to this HTML version instead of the printed version and accept our apologies for any inconvenience.
By David Salomon, March 2000
By Dmitriy V. Pinskiy, Joerg Meyer, Bernd Hamann, Kenneth I. Joy, Eric Brugger, Mark Duchaineau, March 2000
By Jack Wilson, March 2000
By Kevin Fu, September 1999
By George Crawford, September 1999
By Rachel Pottinger, September 1999
By Michael Stricklen, Bob Cummings, Brandon Bonner, September 1999
By Wei-Mei Shyr, Brian Borowski, September 1999
By Forrest Hoffman, William Hargrove, September 1999
By Per Andersen, September 1999
By Alessio Lomuscio, June 1999
By Cristobal Baray, Kyle Wagner, June 1999
By Michael J. Grimley, Brian D. Monroe, June 1999
By Roberto A. Flores-Mendez, June 1999
By G. Michael Youngblood, June 1999
By Scott Lewandowski, March 1999
By George Crawford, March 1999
By Jack Wilson, March 1999
By Dimitris Lioupis, Andreas Pipis, Maria Smirli, Michael Stefanidakis, March 1999
By João M. P. Cardoso, Mário P. Vestístias, March 1999
By Shane Hart, March 1999
By Demetris G. Galatopoullos, Elias S. Manolakos, March 1999
By Kevin Fu, November 1998
By Shawn Brown, November 1998
By George Crawford, November 1998
By Jack Wilson, November 1998
By Larry Chen, November 1998
By James Richvalsky, David Watkins, November 1998
By Robert Schlaff, November 1998
By Peggy Wright, November 1998
Doctoral students often find it hard to gauge how productive they should be. An analysis of the résumés of doctoral students in the Management Information Systems (MIS) field gives a better understanding of what is expected of current students compared with former students. Both conference presentations and journal publications are examined. Finally, the analysis considers whether the quantity of publications is related to the ranking of the school a student attends.
By Kai Larsen, November 1998
By Lynellen D. S. Perry, September 1998
This article provides a brief summary of basic layout management in the Java Abstract Window Toolkit (AWT) and is intended to serve as a foundation for more sophisticated AWT programming.
By George Crawford, September 1998
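As a minimal illustration of the kind of layout code the article above covers, the following frame combines BorderLayout and FlowLayout. The class name and widget labels are only examples, and window-closing handling is omitted for brevity.

```java
import java.awt.*;

public class LayoutDemo extends Frame {
    public LayoutDemo() {
        super("AWT layout demo");
        setLayout(new BorderLayout());            // a Frame's default layout

        Panel buttons = new Panel(new FlowLayout(FlowLayout.RIGHT));
        buttons.add(new Button("OK"));
        buttons.add(new Button("Cancel"));
        add(buttons, BorderLayout.NORTH);         // buttons flow along the top

        add(new TextArea(), BorderLayout.CENTER); // center gets remaining space
        add(new Label("Status: ready"), BorderLayout.SOUTH);
        pack();                                   // size to preferred layout
    }

    public static void main(String[] args) {
        new LayoutDemo().setVisible(true);
    }
}
```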
By Jack Wilson, September 1998
Advances in computing have awakened a century-old teaching philosophy: learner-centered education. This philosophy is founded on the premise that people learn best when engrossed in the topic, participating in activities that motivate learning and help them to synthesize their own understanding. We consider how the object-oriented design (OOD) learning tools developed by Rosson and Carroll [5] facilitate active learning of this sort. We observed sixteen students as they worked through a set of user interaction scenarios about a blackjack game. We discuss how the features of these learning tools influenced the students' efforts to learn the basic constructs of OOD.
By Hope D. Harley, Cheryl D. Seals, Mary Beth Rosson, September 1998
By Robert Zubek, September 1998
Explanation is an important feature that needs to be integrated into software products. Early software for the horizontal market (such as word processors) contained help systems. More specialized systems, known as expert systems, were developed to produce solutions requiring specific domain knowledge of the problem being solved. Expert systems initially produced results consistent with those of human experts, but they only mimicked the rules the experts outlined. The decisions they provide include no justification, causing users to doubt the results reported by the system. If users were dealing with a human expert, they could ask for the line of reasoning used to draw the conclusion, and that line of reasoning could then be inspected for discrepancies by another expert or verified in some other manner. Software systems need better explanations of how to use them and how they produce results. This will allow users to take advantage of the numerous features provided and increase their trust in the software product.
By Bruce A. Wooley, September 1998
By Erika Dawn Gernand, May 1998
By Marianne G. Petersen, May 1998
By Jason Hong, May 1998
By Phil Agre, May 1998
By Lynellen D. S. Perry, May 1998
By George Crawford, May 1998
By Jack Wilson, May 1998
By Randolph Chung, Lynellen D. S. Perry, April 1998
By Sharon Lauback, April 1998
By Hiroaki Kitano, Minoru Asada, Itsuki Noda, Hitoshi Matsubara, April 1998
Robotic soccer is a challenging research domain involving multiple agents that need to collaborate in an adversarial environment to achieve specific objectives. This article describes CMUnited, the team of small robotic agents that we developed to enter the RoboCup-97 competition. We designed and built the robotic agents, devised the appropriate vision algorithm, and developed and implemented algorithms for strategic collaboration between the robots in an uncertain and dynamic environment. The robots can organize themselves in formations, hold specific roles, and pursue their goals. In game situations, they have demonstrated their collaborative behaviors on multiple occasions. The robots can also switch roles to maximize the overall performance of the team. We present an overview of the vision processing algorithm which successfully tracks multiple moving objects and predicts trajectories. The paper then focuses on the agents' behaviors ranging from low-level individual behaviors to coordinated, strategic team behaviors. CMUnited won the RoboCup-97 small-robot competition at IJCAI-97 in Nagoya, Japan.
By Manuela Veloso, Peter Stone, Kwun Han, Sorin Achim, April 1998
By Todd M. Schrider, April 1998
The Java language is compiled into a platform-independent bytecode format. Much of the information contained in the original source code remains in the bytecode, making decompilation easy. We examine how code obfuscation can help protect Java bytecode.
By Douglas Low, April 1998
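To make the threat concrete, here is a hedged sketch of what a simple identifier-renaming pass does. Both classes are invented examples shown as separate source files; real obfuscators typically go further, for instance by also transforming control flow.

```java
// Before obfuscation (roughly what a decompiler recovers, since class,
// method, and field names survive in the bytecode):
public class AccountManager {
    private double balance;
    public void deposit(double amount) { balance += amount; }
}
```

```java
// After an identifier-renaming pass (illustrative output, not from any
// particular tool): behaviour is unchanged, but the intent is hidden.
public class a {
    private double a;
    public void a(double a) { this.a += a; }
}
```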
By George Crawford, April 1998
By Jack Wilson, April 1998
By Lynellen D. S. Perry, April 1998
By John Cavazos, April 1998
By Vishal Shah, November 1997
By Brian T. Kurotsuchi, November 1997
The Java Native Interface (JNI) comes with the standard Java Development Kit (JDK) from Sun Microsystems. It permits Java programmers to integrate native code (currently C and C++) into their Java applications. This article will focus on how to make use of the JNI and will provide a few examples illustrating the usefulness of this feature. Although a native method system was included with the JDK 1.0 release, this article is concerned with the JDK 1.1 JNI which has several new features, and is much cleaner than the previous release. Also, the examples given will be specific to JDK 1.1 installed on the Solaris Operating System.
By S. Fouzi Husaini, November 1997
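The basic shape of a JNI-enabled class looks like the sketch below; the library and method names are illustrative rather than taken from the article.

```java
public class Hello {
    // Implemented in C or C++ and compiled into a shared library.
    public native String getGreeting();

    static {
        // Loads libhello.so on Solaris, the platform the article targets.
        System.loadLibrary("hello");
    }

    public static void main(String[] args) {
        System.out.println(new Hello().getGreeting());
    }
}
```

Running javah -jni Hello on the compiled class generates a C header declaring Java_Hello_getGreeting, which the native library must implement.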
By George Crawford, November 1997
By John Cavazos, November 1997
By Michael A. Grasso, Mark R. Nelson, October 1997
By Jinsoo Park, October 1997
By Vineet Kapur, Douglas Troy, James Oris, October 1997
By Wayne Smith, October 1997
By Neal G. Shaw, October 1997
By Susan E. Yager, October 1997
By Hal Berghel, October 1997
By Jack Wilson, October 1997
By Thomas C. Waszak, October 1997
By John Cavazos, October 1997
By Adam Lake, May 1997
By Paul Rademacher, May 1997
Frameless Rendering (FR) is a rendering paradigm which performs stochastic temporal filtering by updating pixels in a random order, based on most recent available input data, and displaying them to the screen immediately [1]. This is a departure from frame-based approaches commonly experienced in interactive graphics. A typical interactive graphics session uses a single input state to compute an entire frame. This constrains the state to be known at the time the first pixel's value is computed. Frameless Rendering samples inputs many times during the interval which begins at the start of the first pixel's computation and ends with the last pixel's computation. Thus, Frameless Rendering performs temporal supersampling - it uses more samples over time. This results in an approximation to motion blur, both theoretically and perceptually. This paper explores this motion blur and its relationship to: camera open shutter time, current computer graphics motion-blur implementations, temporally anti-aliased images, and the Human Visual System's (HVS) motion smear quality (see 'quality' footnote) [2]. Finally, we integrate existing research results to conjecture how Frameless Rendering can use knowledge of the Human Visual System's blurred retinal image to direct spatiotemporal sampling. In other words, we suggest importance sampling (see 'sampling' footnote) by prioritizing pixels for computation based on their importance to the visual system in discerning what is occurring in an interactive image sequence.
By Ellen J. Scher Zagier, May 1997
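A minimal sketch of the update loop implied by this description follows; the Scene, Display, and InputState abstractions are invented here for illustration, not taken from the paper.

```java
import java.util.Random;

// Frameless-rendering sketch: pixels are updated in random order, each one
// shaded with the *latest* input state and written to the display at once,
// so no two pixels of the image need correspond to the same instant in time.
public class FramelessLoop {
    interface Scene { int shade(int x, int y, InputState s); }   // returns ARGB
    interface Display { void writePixel(int x, int y, int argb); }
    static class InputState { /* camera pose, object positions, ... */ }

    static volatile InputState latest = new InputState();  // set by input thread

    static void render(Scene scene, Display display, int w, int h) {
        Random rng = new Random();
        while (true) {
            int x = rng.nextInt(w);
            int y = rng.nextInt(h);
            InputState s = latest;                          // most recent input
            display.writePixel(x, y, scene.shade(x, y, s)); // show immediately
        }
    }
}
```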
This paper covers the techniques of Polygonal Simplification used to produce Levels of Detail (LODs). The problem of creating LODs is a complex one: how can simpler versions of a model be created? How can the approximation error be measured? How can the visual degradation be estimated? Can all this be done automatically? After presenting the basic aims and principles of polygonal simplification, we compare recent algorithms and describe their various strengths and weaknesses.
By Mike Krus, Patrick Bourdot, Françoise Guisnel, Gullaume Thibault, May 1997
The increasing demands of 3D game realism - in terms of both scene complexity and speed of animation - are placing excessive strain on the current low-level, computationally expensive graphics drawing operations. Despite these routines being highly optimized, specialized, and often being implemented in assembly language or even in hardware, the ever-increasing number of drawing requests for a single frame of animation causes even these systems to become overloaded, degrading the overall performance. To offset these demands and dramatically reduce the load on the graphics subsystem, we present a system that quickly and efficiently finds a large portion of the game world that is not visible to the viewer for each frame of animation, and simply prevents it from being sent to the graphics system. We build this searching mechanism for unseen parts from common and easily implemented graphics algorithms.
By Kenneth E. Hoff, May 1997
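A common building block for the kind of visibility search described above is shown below; it is a generic sketch (not the paper's exact algorithm) that culls any object whose bounding sphere lies entirely behind one of the view frustum's planes, so it is never submitted to the graphics pipeline.

```java
public class FrustumCuller {
    // Plane stored as ax + by + cz + d = 0, with a unit-length normal
    // pointing into the frustum interior.
    static class Plane { double a, b, c, d; }

    static boolean sphereVisible(Plane[] frustum,
                                 double cx, double cy, double cz, double r) {
        for (Plane p : frustum) {
            double dist = p.a * cx + p.b * cy + p.c * cz + p.d; // signed distance
            if (dist < -r) {
                return false;   // completely outside this plane: skip drawing
            }
        }
        return true;            // potentially visible: send to the renderer
    }
}
```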
By Phil Agre, May 1997
By Matt Cutts, May 1997
By Jack Wilson, May 1997
By John Cavazos, May 1997
By Sofia C. DeFernandez, April 1997
By Jack Wilson, April 1997
By John Cavazos, November 1996
By Bjorn Stabell, Ken Ronny Schouten, November 1996
By Sarah Elizabeth Burcham, November 1996
By Melissa Chaika, November 1996
By Lorrie Faith Cranor, November 1996
By Fabian Ernst, Jeroen Moelands, Seppo Pieterse, November 1996
By G. Bowden Wise, November 1996
By Jack Wilson, November 1996
By Sara Carlstead, November 1996
By Lynellen D. S. Perry, November 1996
By John Cavazos, November 1996
By Frank Klassner, September 1996
By Kentaro Toyama, Drew McDermott, September 1996
By Lynellen D. S. Perry, September 1996
By Christopher O. Jaynes, September 1996
By Joseph Beck, Mia Stern, Erik Haugsjaa, September 1996
Anytime Algorithms are algorithms that exchange execution time for quality of results. Since many computational tasks are too complicated to be completed at real-time speeds, anytime algorithms allow systems to intelligently allocate computational time resources in the most effective way, depending on the current environment and the system's goals. This article briefly covers the motivations for creating anytime algorithms, the history of their development, a definition of anytime algorithms, and current research involving anytime algorithms.
By Joshua Grass, September 1996
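The contract described above can be captured in a few lines: keep a best-so-far answer that is valid whenever the caller's time budget expires, and let extra time only improve it. The class below is a generic sketch with invented names, not code from the article.

```java
public abstract class AnytimeSolver<S> {
    protected abstract S initialSolution();            // cheap, low quality
    protected abstract S refine(S current);            // one improvement step
    protected abstract double quality(S solution);     // higher is better

    public S solve(long budgetMillis) {
        long deadline = System.currentTimeMillis() + budgetMillis;
        S best = initialSolution();
        while (System.currentTimeMillis() < deadline) {
            S candidate = refine(best);
            if (quality(candidate) > quality(best)) {
                best = candidate;                      // quality never decreases
            }
        }
        return best;                                   // valid at any interruption
    }
}
```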
By G. Bowden Wise, September 1996
By Jack Wilson, September 1996
By Paul Rubel, September 1996
By Lorrie Faith Cranor, September 1996
By Michael Neuman, Diana Moore, April 1996
By Aurobindo Sundaram, April 1996
By Jason Evans, Deborah Frincke, April 1996
By Lorrie Faith Cranor, April 1996
The explosive growth of networked and internetworked computer systems during the past decade has brought about a need for stronger protection mechanisms. This paper discusses three authentication protocols that incorporate methods for effective user authentication. The first two protocols have been previously discussed in the literature; the third draws from the first two and others to produce an authentication scheme that provides both mutual authentication and secure key distribution, and that is easy to use, is compatible with present operating systems, is transparent across systems, and protects the password file.
By Charles Cavaiani, Jim Alves-Foss, April 1996
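A generic challenge-response exchange of the kind such protocols combine is sketched below. It is illustrative only and is not one of the three protocols the paper discusses: each side proves knowledge of a shared key by returning a MAC over the peer's fresh nonce, without ever sending the key itself.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;
import java.util.Arrays;

public class ChallengeResponse {
    private final SecretKeySpec key;
    private final SecureRandom random = new SecureRandom();

    public ChallengeResponse(byte[] sharedKey) {
        this.key = new SecretKeySpec(sharedKey, "HmacSHA256");
    }

    public byte[] freshNonce() {
        byte[] nonce = new byte[16];
        random.nextBytes(nonce);        // unpredictable challenge
        return nonce;
    }

    public byte[] respond(byte[] peerNonce) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        return mac.doFinal(peerNonce);  // proof of key possession
    }

    public boolean verify(byte[] myNonce, byte[] response) throws Exception {
        // A constant-time comparison is preferable in production code.
        return Arrays.equals(respond(myNonce), response);
    }
}
```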
By G. Bowden Wise, April 1996
By Jack Wilson, April 1996
By Lorrie Faith Cranor, April 1996
By George E. Hatoun, Brad Templeton, February 1996
By Dan Ghica, February 1996
By C. Fidge, February 1996
A method of illustrating program structure by showing how sections depend on each other is presented. This suggests an intuitive metric for program partitioning, which is developed with supporting theory.
By Mark Ray, February 1996
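One concrete way to realize such a metric (illustrative, not the paper's exact formulation) is to model sections as nodes and "depends on" as directed edges, then score a proposed partition by the fraction of edges that cross partition boundaries: lower means more self-contained parts.

```java
import java.util.*;

public class PartitionMetric {
    // deps maps each section to the sections it depends on; partitionOf maps
    // every section (source or target) to the index of its assigned part.
    public static double crossingFraction(Map<String, Set<String>> deps,
                                          Map<String, Integer> partitionOf) {
        int total = 0, crossing = 0;
        for (Map.Entry<String, Set<String>> e : deps.entrySet()) {
            for (String target : e.getValue()) {
                total++;
                if (!partitionOf.get(e.getKey()).equals(partitionOf.get(target))) {
                    crossing++;     // dependency crosses a partition boundary
                }
            }
        }
        return total == 0 ? 0.0 : (double) crossing / total;
    }
}
```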
By Navid Sabbaghi, February 1996
By G. Bowden Wise, February 1996
By Lorrie Faith Cranor, February 1996
By Scott Ramsay MacDonald, February 1996
By Jack Wilson, February 1996
By Sara M. Carlstead, February 1996
By Lorrie Faith Cranor, February 1996
By Lorrie Faith Cranor, November 1995
By Jeff Robbins, November 1995
By Matt Rosenberg, November 1995
By Melissa Chaika, November 1995
By Sara M. Carlstead, November 1995
By Rob Jackson, November 1995
By Scott Ramsey MacDonald, November 1995
By G. Bowden Wise, November 1995
By Jack Wilson, November 1995
By Sara M. Carlstead, November 1995
By Lorrie Faith Cranor, November 1995
By Lorrie Faith Cranor, November 1995
By Mark Allman, September 1995
By Scott Ruthfield, September 1995
By Ben W. Brumfield, September 1995
By Jay A. Kreibich, September 1995
By Darren Bolding, September 1995
By Sarah Elizabeth Burcham, September 1995
By Jeremy Buhler, September 1995
By G. Bowden Wise, September 1995
By Lorrie Faith Cranor, Adam Lake, September 1995
By Saveen Reddy, September 1995
By Sara M. Carlstead, September 1995
By Lorrie Faith Cranor, September 1995
By Ronald B. Krisko, May 1995
By Scott Ramsey MacDonald, May 1995
By Saul Jimenez, May 1995
By Lorrie Faith Cranor, May 1995
By Adam Lake, May 1995
By Sara M. Carlstead, May 1995
By Saveen Reddy, May 1995
By G. Bowden Wise, May 1995
By Sara M. Carlstead, May 1995
By Ronald B. Krisko, May 1995
By Lorrie Faith Cranor, February 1995
By Ronald B. Krisko, February 1995
By Saveen Reddy, February 1995
By Lorrie Faith Cranor, February 1995
By Saveen Reddy, December 1994
By Terry White, December 1994
By Lorrie Faith Cranor, December 1994
By Saveen Reddy, September 1994
By Saveen Reddy, September 1994
By Terry White, September 1994
By Saveen Reddy, September 1994