DEPARTMENT: Hello world
Efficient sensor placement for environmental monitoring
By Marinka Zitnik, March 2014
By Ryan K. L. Ko, March 2010
Computers continue to get faster exponentially, but the computational demands of science are growing even faster. Extreme requirements arise in at least three areas.
By David P. Anderson, March 2010
Despite its promise, cloud computing innovation has so far been driven almost exclusively by a few industry leaders, such as Google, Amazon, Yahoo!, Microsoft, and IBM. The involvement of the wider research community, in both academia and industrial labs, has been patchy and has lacked a clear agenda. In our opinion, this limited participation stems from the prevalent view that clouds are mostly an engineering and business-oriented phenomenon based on stitching together existing technologies and tools.
By Ymir Vigfusson, Gregory Chockler, March 2010
In recent years, empirical science has been evolving from physical experimentation to computation-based research. In astronomy, researchers seldom spend time at a telescope, but instead access the large number of image databases that are created and curated by the community [42]. In bioinformatics, data repositories hosted by entities such as the National Institutes of Health [29] provide the data gathered by Genome-Wide Association Studies and enable researchers to link particular genotypes to a variety of diseases.
By Gideon Juve, Ewa Deelman, March 2010
By Sumit Narayan, Chris Heiden, March 2010
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. With this pay-as-you-go model of computing, cloud solutions are seen as having the potential to both dramatically reduce costs and accelerate application development.
By Ramaswamy Chandramouli, Peter Mell, March 2010
At the turn of the 20th century, companies stopped generating their own power and plugged into the electricity grid. In his now famous book The Big Switch, Nick Carr analogizes those events of a hundred years ago to the tectonic shift taking place in the technology industry today.
By Guy Rosen, March 2010
By Daniel W. Goldberg, December 2009
The social Web is a set of ties that enable people to socialize online, a phenomenon that has existed since the early days of the Internet in environments like IRC, MUDs, and Usenet (e.g., [4, 12]). People used these media in much the same way they do now: to communicate with existing friends and to meet new ones. The fundamental difference was the scale, scope, and diversity of participation.
By Sarita Yardi, December 2009
"Know thyself". Carved in stone in front of the Temple of Apollo at Delphi, that was the first thing people saw when they visited the Oracle to find answers. The benefits of knowing oneself are many. It fosters insight, increases self-control, and promotes positive behaviors such as exercise and energy conservation.
By Ian Li, Anind Dey, Jodi Forlizzi, December 2009
Research related to online social networks has addressed a number of important problems related to the storage, retrieval, and management of social network data. However, privacy concerns stemming from the use of social networks, or the dissemination of social network data, have largely been ignored. And with more than 250 million active Facebook (http://facebook.com) users, nearly half of whom log in at least once per day [5], these concerns can't remain unaddressed for long.
By Grigorios Loukides, Aris Gkoulalas-Divanis, December 2009
How sure are you that your friends are who they say they are? In real life, unless you are the target of some form of espionage, you can usually be fairly certain that you know who your friends are because you have a history of shared interests and experiences. Likewise, most people can tell, just by using common sense, if someone is trying to sell them on a product, idea, or candidate. When we interact with people face-to-face, we continuously reevaluate whether something just seems off, based on body language and other social and cultural cues.
By Roya Feizy, Ian Wakeman, Dan Chalmers, December 2009
Searching for information online has become an integral part of our everyday lives. However, sometimes we don't know the specific search terms to use, while other times, the specific information we're seeking hasn't been recorded online yet.
By Gary Hsieh, December 2009
While touchscreens allow extensive programmability and have become ubiquitous in today's gadgetry, such configurations lack the tactile sensations and feedback that physical buttons provide. As a result, these devices require more attention to use than their button-enabled counterparts. Still, the displays provide the ultimate interface flexibility and thus afford a much larger design space to application developers.
By Chris Harrison, Scott Hudson, September 2009
Virtual machine technology, or virtualization, is gaining momentum in the information technology community. While virtual machines are not a new concept, recent advances in hardware and software technology have brought virtualization to the forefront of IT management. Stability, cost savings, and manageability are among the reasons for the recent rise of virtualization. Virtual machine solutions can be classified into hardware-based, software-based, and operating system/container-based approaches. From its inception on the mainframe to distributed servers on x86, the virtual machine has matured and will play an increasing role in systems management.
By Jeff Daniels, September 2009
By Justin Solomon, June 2009
By Anna Ritchie, June 2009
Courier problems are a set of recently proposed combinatorial optimization problems inspired by novel requirements in railway wagon scheduling. They consider the scheduling strategies of mobile couriers that initially reside at fixed locations and are assigned duties of commodity transfer between different pairs of locations. The scenario may be static or dynamic. The goal is to optimize the movement of the couriers subject to constraints on the traversed path or the associated cost. We discuss several varieties of courier problems formalized on graphs and address potential methods for their solution.
By Malay Bhattacharyya, June 2009
By Aris Gkoulalas-Divanis, Vassilios S. Verykios, June 2009
By Steve Clough, March 2009
By Daniel W. Goldberg, March 2009
This article describes a technique to visualize query results, representing purchase orders placed on Amazon.com, on a traditional 2-D scatter plot and on a space-filling spiral. We integrate 3-D objects that vary their spatial placement, color, and texture properties into a visualization algorithm. This algorithm represents important aspects of a purchase order based on experimental results from human vision, computer graphics, and psychology. The resulting visual abstractions allow viewers to rapidly and effectively explore and analyze the underlying purchase-order data.
By Amit Prakash Sawant, Christopher G. Healey, Dongfeng Chen, Rada Chirkova, March 2009
By Caio Camargo, March 2009
By William Ella, December 2008
By Joonghoon Lee, December 2008
By Cara Cocking, December 2008
Computational semantics has become an interesting and important branch of computational linguistics. Born from the fusion of formal semantics and computer science, it is concerned with the automated processing of meaning associated with natural language expressions [2]. Systems of semantic representation, hereafter referred to as semantic formalisms, exist to describe meaning underlying natural language expressions. To date, several formalisms have been defined by researchers from a number of diverse disciplines including philosophy, logic, psychology and linguistics. These formalisms have a number of different applications in the realm of computer science. For example, in machine translation a sentence could be parsed and translated into a series of semantic expressions, which could then be used to generate an utterance with the same meaning in a different language [14]. This paper presents two existing formalisms and examines their user-friendliness. Additionally, a new form of semantic representation is proposed with wide coverage and user-friendliness suitable for a computational linguist.
By Craig Thomas, December 2008
By Salik Syed, September 2008
Optimizing embedded applications using a compiler can generally be broken down into two major categories: hand-optimizing code to take advantage of a particular processor's compiler and applying built-in optimization options to proven and well-polished code. The former is well documented for different processors, but little has been done to find generalized methods for optimal sets of compiler options based on common goal criteria such as application code size, execution speed, power consumption, and build time. This article discusses the fundamental differences between these two general categories of optimizations using the compiler. Examples of common, built-in compiler options are presented using a simulated ARM processor and C compiler, along with a simple methodology that can be applied to any embedded compiler for finding an optimal set of compiler options.
By Joe Bungo, September 2008
The visual appearance of volumes of water particles, such as clouds, waterfalls, and fog, depends both on microscopic interactions between light rays and individual droplets of water and on macroscopic interactions between multiple droplets and paths of light rays. This paper presents a model that builds upon a typical single-scattering volume renderer to correctly account for these effects. To accurately simulate the visual appearance of a surface or a volume of particles in a computer-generated image, the properties of the material or particle must be specified using a Bidirectional Reflectance Distribution Function (BRDF), which describes how light reflects off a material, and the Bidirectional Transmittance Distribution Function (BTDF), which describes how light refracts into a material. This paper describes an optimized BRDF and BTDF for volumes of water droplets, which takes their geometry into account in order to produce well-known effects, such as rainbows and halos. It also describes how a multiple-scattering path tracing volume integrator can be used to more accurately simulate macroscopic light transport through a volume of water, creating a more "cloudlike" appearance than a single-scattering volume integrator. This paper focuses on replicating the visual appearance of volumes of water particles, and although it makes use of physical models, the techniques presented are not intended to be physically accurate.
By James Hegarty, September 2008
By Ed DeHart, September 2008
By Sid Stamm, June 2008
This article presents an automated technique for visualizing large software architectures using multiple graphical representations, including multi-dimensional scaling, 2-D grid, and spiral layouts. We describe how our software visualization methods were applied to the Network Appliance operating system known as Data ONTAP 7G (ONTAP). We show how each method can be applied to comprehend a specific aspect of ONTAP. This approach can be used by software engineers, architects, and developers to better understand the architecture of their code.
By Amit Prakash Sawant, Naveen Bali, March 2008
By Cara Cocking, March 2008
By Leslie Sandoval, March 2008
This paper presents the results of an empirical study aimed at examining the extent to which software engineers follow a software process and the extent to which they improvise during the process. Our subjects tended to classify processes into two groups. In the first group are the processes that are formal, strict, and well-documented. In the second group are the processes that are informal and not well-structured. The classification has similar characteristics to the model proposed by Truex, Baskerville, and Travis [12]. Our first group is similar to their methodical classification, and our second group is similar to their amethodical classification. Interestingly, software engineers using a process in the second group stated that they were not using a process. We believe that software engineers who think that they are not using a process, because they hold the prevalent concept of a process as something methodical, strict, and structured, actually are using an informal (amethodical) process. We also found that software engineers improvise while using both types of processes in order to overcome shortcomings in the planned path that arose from unexpected situations. This finding leads us to conclude that amethodical processes are processes too.
By Rosalva E. Gallardo-Valencia, Susan Elliott Sim, December 2007
This project visualizes a scientific dataset containing two-dimensional flow data from a simulated supernova collapse provided by astrophysics researchers. We started by designing visualizations as multiple hand drawings representing the flow data, without taking into consideration the implementation constraints of our designs, and then implemented a few of these designs. We used an assortment of simple geometric graphical objects, called glyphs, such as dots, lines, arrows, and triangles, to represent the flow at each sample point. We also incorporated transparency in our visualizations. We identified two important goals for our project: (1) design different types of graphical glyphs to support flexibility in their placement and in their ability to represent multidimensional data elements, and (2) build an effective visualization technique that uses glyphs to represent the two-dimensional flow field.
By Amit Prakash Sawant, Christopher G. Healey, December 2007
Fans of PC role-playing games need no introduction to BioWare, the Edmonton, Alberta-based developer of Baldur's Gate, Neverwinter Nights, and Jade Empire, among others. The company recently opened a studio in Austin, Texas to develop a massively multiplayer online role-playing game (MMORPG, or simply MMO) for an unannounced intellectual property. Ben Earhart, client technology lead on the new project, took a few hours out of his busy schedule to discuss with Crossroads the future of real-time rendering: 3-D graphics that render fast enough to respond to user input, such as those required for video games.
By James Stewart, December 2007
Prosodic phrasing is the means by which speakers of any given language break up an utterance into meaningful chunks. The term "prosody" itself refers to the tune or intonation of an utterance, and therefore prosodic phrases literally signal the end of one tune and the beginning of another. This study uses phrase break annotations in the Aix-MARSEC corpus of spoken English as a "gold standard" for measuring the degree of correspondence between prosodic phrases and the discrete syntactic grouping of prepositional phrases, where the latter is defined via a chunk parsing rule using nltk_lite's regular expression chunk parser.
A three-way comparison is also introduced between the "gold standard" chunk parsing rule and human judgment in the form of intuitive predictions about phrasing. Results show that even with a discrete syntactic grouping and a small sample of text, problems may arise for this rule-based method due to uncategorical behavior in parts of speech. Lack of correspondence between intuitive prosodic phrases and corpus annotations highlights the optional nature of certain boundary types. Finally, there are clear indications, supported by corpus annotations, that significant prosodic phrase boundaries occur within sentences and not just at full stops.
By Claire Brierley, Eric Atwell, December 2007
By Gregory M. Zaverucha, December 2007
By Daniel Alex Finkelstein, December 2007
Static analysis tools are useful for finding common programming mistakes that often lead to field failures. However, static analysis tools regularly generate a high number of false positive alerts, requiring manual inspection by the developer to determine if an alert is an indication of a fault. The adaptive ranking model presented in this paper utilizes feedback from developers about inspected alerts in order to rank the remaining alerts by the likelihood that an alert is an indication of a fault. Alerts are ranked based on the homogeneity of populations of generated alerts, historical developer feedback in the form of suppressing false positives and fixing true positive alerts, and historical, application-specific data about the alert ranking factors. The ordering of alerts generated by the adaptive ranking model is compared to a baseline of randomly-, optimally-, and static analysis tool-ordered alerts in a small role-based health care application. The adaptive ranking model provides developers with 81% of true positive alerts after investigating only 20% of the alerts, whereas an average of 50 random orderings of the same alerts found only 22% of true positive alerts after investigating 20% of the generated alerts.
By Sarah Smith Heckman, December 2007
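To make the idea of feedback-driven ranking concrete, here is a minimal, hypothetical Java sketch in the spirit of the model described above; the Laplace-smoothed scoring rule and all names (AlertRanker, recordFixed, and so on) are illustrative assumptions, not the paper's actual model.

import java.util.*;

public class AlertRanker {
    // alert type -> {count fixed (true positives), count suppressed (false positives)}
    private final Map<String, double[]> feedback = new HashMap<>();

    public void recordFixed(String type)      { feedback.computeIfAbsent(type, t -> new double[2])[0]++; }
    public void recordSuppressed(String type) { feedback.computeIfAbsent(type, t -> new double[2])[1]++; }

    // Laplace-smoothed estimate that an alert of this type indicates a real fault.
    public double score(String type) {
        double[] f = feedback.getOrDefault(type, new double[2]);
        return (f[0] + 1) / (f[0] + f[1] + 2);
    }

    // Remaining alerts are presented in descending order of estimated fault likelihood.
    public List<String> rank(List<String> alertTypes) {
        List<String> ranked = new ArrayList<>(alertTypes);
        ranked.sort(Comparator.comparingDouble(this::score).reversed());
        return ranked;
    }
}

As developers fix or suppress inspected alerts, the scores shift and the unchecked alerts most likely to be true positives float to the top of the queue.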
The physiology of how the human brain recalls memories is not well understood. Neural networks have been used in an attempt to model this process.
Two types of networks, auto-associative and hetero-associative, have been used in several models of temporal sequence memory for simple sequences of randomly generated and of structured patterns. Previous work has shown that a model with coupled auto- and hetero-associative continuous attractor networks can robustly recall learned simple sequences. In this paper, we compare Hebbian learning and pseudo-inverse learning in a model for recalling temporal sequences in terms of their storage capacities. The pseudo-inverse learning method is shown to have a much higher storage capacity, making the new network model 700% more efficient by reducing calculations.
By Kate Patterson, December 2007
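For orientation, the two learning rules compared above, written in their standard hetero-associative textbook forms (common formulations assumed here for illustration, not necessarily the paper's exact model). Storing patterns \xi^1, \ldots, \xi^p of dimension N so that pattern \mu cues pattern \mu+1:

W^{\mathrm{Hebb}}_{ij} = \frac{1}{N} \sum_{\mu=1}^{p-1} \xi_i^{\mu+1}\,\xi_j^{\mu},
\qquad
W^{\mathrm{PI}}_{ij} = \frac{1}{N} \sum_{\mu,\nu=1}^{p-1} \xi_i^{\mu+1}\,(C^{-1})_{\mu\nu}\,\xi_j^{\nu},
\quad
C_{\mu\nu} = \frac{1}{N} \sum_{k=1}^{N} \xi_k^{\mu}\,\xi_k^{\nu}.

The inverse correlation matrix C^{-1} decorrelates the stored patterns, which is what lifts the storage capacity of random-pattern memories from roughly 0.14N under Hebbian learning toward N under pseudo-inverse learning.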
By Nitin Madnani, September 2007
In this study, we developed an algorithmic method to analyze late contrast-enhanced (CE) magnetic resonance (MR) images, revealing the so-called hibernating myocardium. The algorithm is based on an efficient and robust image registration algorithm. Using our method, we are able to integrate the static late CE MR image with its corresponding cardiac cine MR images, constructing cardiac motion CE MR images, which are referred to as cardiac cine CE MR images. This method appears promising as an improved cardiac viability assessment tool.
By Gang Gao, Paul Cockshott, September 2007
By Deepti Singh, Frank Boland, September 2007
This design pattern is an extension of the well-known state pattern [2], which allows an object to change its behavior depending on its internal state. The behavior is defined by events, whose transformation into actions depends on the object's state. This pattern introduces a way to manage state actions.
By Gunther Palfinger, September 2007
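A minimal Java sketch of the idea in the entry above, with invented names (Door, DoorState): the context fires exit and entry actions around each transition, so individual states never have to manage them.

interface DoorState {
    default void onEnter(Door door) {}          // state action run on entry
    default void onExit(Door door) {}           // state action run on exit
    DoorState handle(Door door, String event);  // map an event to the next state
}

class OpenState implements DoorState {
    public void onEnter(Door door) { System.out.println("Door is now open"); }
    public DoorState handle(Door door, String event) {
        return "close".equals(event) ? new ClosedState() : this;
    }
}

class ClosedState implements DoorState {
    public void onEnter(Door door) { System.out.println("Door is now closed"); }
    public DoorState handle(Door door, String event) {
        return "open".equals(event) ? new OpenState() : this;
    }
}

class Door {
    private DoorState state = new ClosedState();

    // The context runs exit/enter actions around every transition, so no
    // individual state has to remember to trigger them.
    public void fire(String event) {
        DoorState next = state.handle(this, event);
        if (next != state) {
            state.onExit(this);
            state = next;
            state.onEnter(this);
        }
    }

    public static void main(String[] args) {
        Door d = new Door();
        d.fire("open");   // prints: Door is now open
        d.fire("close");  // prints: Door is now closed
    }
}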
By Jonathan Doyle, September 2007
By Justin Solomon, September 2007
This paper presents the core knowledge required to properly develop 2D games in Java. We describe the common pitfalls that can easily degrade graphics performance and show how we achieved impressive frames-per-second display updates when implementing Minueto, a game development framework.
By Alexandre Denault, Jörg Kienzle, March 2007
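One pitfall of the kind alluded to above is rendering through the passive Swing repaint pipeline; a common remedy is active rendering through a double-buffered BufferStrategy. The sketch below is a generic illustration of that technique, not Minueto's actual API.

import java.awt.*;
import java.awt.image.BufferStrategy;
import javax.swing.JFrame;

public class ActiveRenderLoop {
    public static void main(String[] args) throws InterruptedException {
        JFrame frame = new JFrame("Active rendering sketch");
        Canvas canvas = new Canvas();
        canvas.setPreferredSize(new Dimension(640, 480));
        canvas.setIgnoreRepaint(true);   // we draw ourselves; skip AWT repaints
        frame.add(canvas);
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);

        canvas.createBufferStrategy(2);  // double buffering
        BufferStrategy strategy = canvas.getBufferStrategy();
        int x = 0;
        while (true) {
            Graphics g = strategy.getDrawGraphics();
            g.setColor(Color.BLACK);
            g.fillRect(0, 0, 640, 480);  // clear the back buffer
            g.setColor(Color.WHITE);
            g.fillRect(x, 220, 40, 40);  // a moving square
            g.dispose();
            strategy.show();             // flip; the screen never shows a half-drawn frame
            x = (x + 2) % 640;
            Thread.sleep(16);            // ~60 updates per second
        }
    }
}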
By Deian Stefan, March 2007
By Caio Camargo, December 2006
Research in proteomics has created two significant needs: the need for an accurate public database of empirically derived mass spectrum information and the need for managing the I/O and organization of mass spectrometry data in the form of files and structures. Lack of an empirically derived database limits the ability of proteomic researchers to identify and study proteins. Managing the I/O and organization of mass spectrometry data is often time-consuming due to the many fields that need to be set and retrieved. As a result, incompatibilities and inefficiencies are created by each programmer handling this in his or her own way. Until recently, storage space and computing power have been the limiting factors in developing tools to handle the vast amount of mass spectrometry information. Now the resources are available to store, organize, and analyze mass spectrometry information. The Illinois Bio-Grid Mass Spectrometry Database is a database of empirically derived tandem mass spectra of peptides created to provide researchers with an organized and searchable database of curated spectrum information to allow more accurate protein identification. The Mass Spectrometry I/O Project creates a framework that handles mass spectrometry data I/O and data organization, allowing researchers to concentrate on data analysis rather than I/O. In addition, the Mass Spectrometry I/O Project leverages several cross-platform and portability-enhancing technologies, allowing it to be utilized on a variety of hardware and operating systems.
By Eric Puryear, Jennifer Van Puymbrouck, David Sigfredo Angulo, Kevin Drew, Lee Ann Hollenbeck, Dominic Battre, Alex Schilling, David Jabon, Gregor von Laszewski, December 2006
By Nick Datzov, December 2006
In an interdisciplinary effort to model protein dependency networks, biologists measure signals from certain proteins within cells over a given interval of time. Using this time series data, the goal is to deduce protein dependency relationships. The mathematical challenge is to statistically measure correlations between given proteins over time in order to conjecture probable relationships. Biologists can then consider these relationships with more scrutiny, in order to confirm their conjectures. One algorithm for finding such relationships makes use of interpolation of the data to produce next-state functions for each protein and the Deegan-Packel Index of Power voting method to measure the strength of correlations between pairs of proteins. The algorithm was previously implemented, but limitations associated with the original language required the algorithm to be re-implemented in a more computationally efficient language. Because of the algebraic focus of the Computational Commutative Algebra language, or CoCoA, the algorithm was re-implemented in this language, and results have been produced much more efficiently. In this paper I discuss the algorithm, the CoCoA language, the implementation of the algorithm in CoCoA, and the quality of the results.
By Grey Ballard, September 2006
By Alexander Bick, September 2006
By Eric C. Rouchka, September 2006
By Chris Jordan, Oliver Baltzer, Sean Smith, August 2006
By Jarrod Trevathan, Wayne Read, August 2006
Decentralized peer-to-peer (P2P) resource sharing applications lack a centralized authority that can facilitate peer and resource look-ups and coordinate resource sharing between peers. Instead, peers directly interact and exchange resources with other peers. These systems are often open and do not regulate the entry of peers into the system. Thus, there can be malicious peers in the system who threaten others by offering Trojan horses and viruses disguised as seemingly innocent resources. Several trust-based solutions exist to address such threats; unfortunately there is a lack of design guidance on how these solutions can be integrated into a resource sharing application. In this paper, we describe how two teams of undergraduate students separately integrated XREP, a third-party reputation-based protocol for file-sharing applications, with PACE, our software architecture-based approach for decentralized trust management. This was done in order to construct trust-enabled P2P file-sharing application prototypes. Our observations have revealed that using an architecture-based approach in incorporating trust into P2P resource-sharing applications is not only feasible, but also significantly beneficial. Our efforts also demonstrate both the ease of adoption and ease of use of the PACE-based approach in constructing such trust-enabled decentralized applications.
By Girish Suryanarayana, Mamadou H. Diallo, Justin R. Erenkrantz, Richard N. Taylor, August 2006
By John C. Georgas, Eric M. Dashofy, Richard N. Taylor, August 2006
By Alexandre Borghi, Valentin David, Akim Demaille, May 2006
Programming languages have a dual role in the construction of software. The language is both our substrate (the stuff we make software from) and our tool (what we use to construct software). Program transformation (PT) deals with the analysis, manipulation, and generation of software. Therefore a close relationship exists between program transformation and programming languages, to the point where the PT field has produced many domain-specific languages for manipulating programs. In this article, I will show you some interesting aspects of one of these languages: Stratego.
By Karl Trygve Kalleberg, May 2006
By Kevin Henry, May 2006
By Mike Maxim, May 2006
By Kibum Kim, December 2005
By Hossein Mobahi, Karrie G. Karahalios, December 2005
By Kayre Hylton, Mary Beth Rosson, John Carroll, Craig Ganoe, December 2005
By Elke Moritz, Thomas Wischgoll, Joerg Meyer, December 2005
By Umer Farooq, December 2005
By Cory Quammen, October 2005
By Aaron McCoy, Declan Delaney, Tomas Ward, October 2005
By Ginger Myles, October 2005
By Mark A. Cohen, October 2005
By Ching Kang Cheng, Xiaoshan Pan, October 2005
By Anh Nguyen, Tadashi Nakano, Tatsuya Suda, August 2005
By Vishakh, Nicholas Urrea, Tadashi Nakano, Tatsuya Suda, August 2005
By K. E. Oliver, August 2005
By George Athanasiou, Leandros Tassiulas, Gregory S. Yovanof, August 2005
By Premshree Pillai, August 2005
By Jarrod Trevathan, May 2005
By Nick Papanikolaou, May 2005
By Artemios G. Voyiatzis, May 2005
By Wing H. Wong, May 2005
By Zachary A. Kissel, May 2005
By George Sakkis, December 2004
By Shlomo Hershkop, Salvatore J. Stolfo, December 2004
By Nathan Dimmock, Ian Maddison, December 2004
By Naveed Ahmad, December 2003
By Ching Kang Cheng, Xiaoshan Pan, December 2003
By Ana Gil, Francisco García, December 2003
By David Stirling, Firas Al-Ali, June 2003
By Ricardo Hoar, Joanne Penner, March 2003
By Sadaf Alam, Roland Ibbett, Frederic Mallet, March 2003
By Craig Thomas, March 2003
By Eric J. Shamow, December 2002
By Zoran Constantinescu, Pavel Petrovic, December 2002
By Stephan Jätzold, August 2002
By Donald C. Bergen, Boise P. Miller, August 2002
By M. Tyler Maxwell, Kirk W. Cameron, August 2002
By Bryan Stroube, June 2002
By Tobias Butte, April 2002
By Cory Quammen, April 2002
By Vandana Pursnani, December 2001
By Tony Belpaeme, Andreas Birk, December 2001
By Bill Stevenson, July 2001
By Kostas Pentikousis, July 2001
At some point in your career, you're going to implement a computer language. You probably won't be implementing Java or C++. You may not even recognize it as a language. Truth be told, there are an awful lot of domain-specific languages, or "little languages" [7] in common use:
By John Aycock, July 2001
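To give the flavor of how small such a language can be, here is a toy interpreter for a postfix calculator, one of the classic "little languages." The Java code and its names are invented for illustration, not taken from the article.

import java.util.*;

public class TinyCalc {
    public static double eval(String program) {
        Deque<Double> stack = new ArrayDeque<>();
        for (String tok : program.trim().split("\\s+")) {
            switch (tok) {
                case "+": stack.push(stack.pop() + stack.pop()); break;
                case "*": stack.push(stack.pop() * stack.pop()); break;
                case "-": { double b = stack.pop(); stack.push(stack.pop() - b); } break;
                case "/": { double b = stack.pop(); stack.push(stack.pop() / b); } break;
                default:  stack.push(Double.parseDouble(tok));  // a number literal
            }
        }
        return stack.pop();
    }

    public static void main(String[] args) {
        System.out.println(eval("3 4 + 2 *"));  // (3 + 4) * 2 -> prints 14.0
    }
}

A lexer that splits on whitespace and a stack are the entire implementation; many real little languages start from exactly this scale.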
Each year the Association for Computing Machinery (ACM) arranges a worldwide programming contest. This contest has two rounds: the regional contests and the World Final. The teams with the best results in the regional contests advance to the World Final. The contest showcases the best programmers in the world to representatives of large companies who are looking for talent. When practicing for programming competitions, remember that all your efforts should be directed at improving your programming skills. No matter what your performance is in a contest, don't be disappointed. Success in programming contests is affected by factors other than skill, most importantly, adrenaline, luck, and the problem set of the contest. One way of getting immediate feedback on your efforts is to join the Valladolid Online Programming Practice/Contest or the online judge hosted by Ural State University (USU). Successfully solving problems increases your online ranking in the respective competitions. This article is for beginning programmers who are new to programming contests. I will discuss the common problems faced in contests, the University of Valladolid online judge, and the USU online judge. The suggestions are divided into three parts: General Suggestions, Online Contest Suggestions, and Valladolid-Specific Suggestions. Throughout this paper, note that in real-time contests the judges are human, while in online contests the judges are computer programs, unless otherwise noted.
By Shahriar Manzoor, July 2001
By Josiah Dykstra, July 2001
By Todd M. Manion, March 2001
By Ramakanth Subrahmanya Devarakonda, March 2001
By Dongwon Lee, Yousub Hwang, March 2001
By Sandeep Jain, December 2000
By Lourens O. Walters, P. S. Kritzinger, December 2000
By Norbert J. Kubilus, September 2000
By Theodore Chiasson, Carrie Gates, September 2000
By Kevin Henry, July 2000
By Stephanie Ludi, July 2000
By Matt Tucker, June 2000
By Stuart Patterson, June 2000
By M. Carmen Juan Lizandra, June 2000
By Mike Maxim, June 2000
By Sebastián Tyrrell, June 2000
By José H. Canós, June 2000
By Eric Scheirer, March 2000
By Kevin Fu, March 2000
By Matt Tucker, March 2000
By Jeremy Kindy, John Shuping, Patricia Yali Underhill, David John, March 2000
By Subhasis Saha, March 2000
Note from ACM Crossroads: Due to errors in the layout process for printing on paper, the version of this article in the printed magazine contained several errors (mostly related to superscripts). This HTML version is the accurate version. Please refer to this HTML version instead of the printed version and accept our apologies for any inconvenience.
By David Salomon, March 2000
By Dmitriy V. Pinskiy, Joerg Meyer, Bernd Hamann, Kenneth I. Joy, Eric Brugger, Mark Duchaineau, March 2000
By Jack Wilson, March 2000
By Kevin Fu, September 1999
By Kevin Fu, September 1999
By George Crawford, September 1999
By Rachel Pottinger, September 1999
By Michael Stricklen, Bob Cummings, Brandon Bonner, September 1999
By Wei-Mei Shyr, Brian Borowski, September 1999
By Forrest Hoffman, William Hargrove, September 1999
By Per Andersen, September 1999
By Alessio Lomusico, June 1999
By Lynellen D. S. Perry, Erika Orrick, June 1999
By Cristobal Baray, Kyle Wagner, June 1999
By Michael J. Grimley, Brian D. Monroe, June 1999
By Roberto A. Flores-Mendez, June 1999
By G. Michael Youngblood, June 1999
By Richard Swan, Anthony Wyatt, Richard Cant, Caroline Langensiepen, April 1999
The rate of improvement in microprocessor speed exceeds the rate of improvement in DRAM (Dynamic Random Access Memory) speed. So although the disparity between processor and memory speed is already an issue, it will become a much bigger one in the years ahead. Computer designers are thus faced with a widening processor-memory performance gap [1], which is now the primary obstacle to improved computer system performance. This article examines this problem as well as its various solutions.
By Nihar R. Mahapatra, Balakrishna Venkatrao, April 1999
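One standard way to quantify the gap's effect is the average memory access time of a cached system; this is a textbook formula used here for illustration, not taken from the article:

\mathrm{AMAT} = t_{\mathrm{hit}} + m \cdot t_{\mathrm{miss}}

where t_{\mathrm{hit}} is the cache hit time, m the miss rate, and t_{\mathrm{miss}} the miss penalty paid at DRAM. With, say, a 1 ns hit time, a 2% miss rate, and a 100 ns penalty, \mathrm{AMAT} = 1 + 0.02 \times 100 = 3 ns. As processors speed up while DRAM latency stays nearly flat, the penalty term measured in processor cycles grows and comes to dominate.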
By Scott Lewandowski, March 1999
By George Crawford, March 1999
By Jack Wilson, March 1999
By Dimitris Lioupis, Andreas Pipis, Maria Smirli, Michael Stefanidakis, March 1999
By João M. P. Cardoso, Mário P. Véstias, March 1999
By Shane Hart, March 1999
By Demetris G. Galatopoullos, Elias S. Manolakos, March 1999
By Kevin Fu, November 1998
By Shawn Brown, November 1998
By George Crawford, November 1998
By Jack Wilson, November 1998
By Larry Chen, November 1998
By James Richvalsky, David Watkins, November 1998
By Robert Schlaff, November 1998
By Peggy Wright, November 1998
Doctoral students often find it hard to know what level of productivity is expected of them. Through an analysis of résumés of doctoral students in the Management Information Systems (MIS) field, a better understanding of what is expected of current students as compared to former students is achieved. Both conference presentations and publications in journals are examined. Finally, there is an examination of whether the quantity of publications can be related to the ranking of the school that a student attends.
By Kai Larsen, November 1998
By Lynellen D. S. Perry, September 1998
By Lynellen D. S. Perry, September 1998
This article provides a brief summary of basic layout management in the Java Abstract Window Toolkit (AWT) and is intended to serve as a foundation for more sophisticated AWT programming.
By George Crawford, September 1998
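As a refresher on the basics the article above covers, here is a minimal AWT example (the class name is invented) that nests a FlowLayout panel inside a BorderLayout frame:

import java.awt.*;

public class LayoutDemo {
    public static void main(String[] args) {
        Frame frame = new Frame("AWT layout demo");
        frame.setLayout(new BorderLayout());          // five named regions

        // The center component stretches as the window is resized.
        frame.add(new TextArea("Center grows with the window"),
                  BorderLayout.CENTER);

        // A nested panel with its own manager: buttons flow left to right.
        Panel buttons = new Panel(new FlowLayout());
        buttons.add(new Button("OK"));
        buttons.add(new Button("Cancel"));
        frame.add(buttons, BorderLayout.SOUTH);

        frame.setSize(400, 300);
        frame.setVisible(true);
    }
}

Nesting containers this way, each with the simplest manager that does the job, is the foundation the more sophisticated AWT layouts build on.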
By Jack Wilson, September 1998
Advances in computing have awakened a century old teaching philosophy: learner-centered education. This philosophy is founded on the premise that people learn best when engrossed in the topic, participating in activities that motivate learning and help them to synthesize their own understanding. We consider how the object-oriented design (OOD) learning tools developed by Rosson and Carroll [5] facilitate active learning of this sort. We observed sixteen students as they worked through a set of user interaction scenarios about a blackjack game. We discuss how the features of these learning tools influenced the students' efforts to learn the basic constructs of OOD.
By Hope D. Harley, Cheryl D. Seals, Mary Beth Rosson, September 1998
By Robert Zubek, September 1998
Explanation is an important feature that needs to be integrated into software products. Early software for the horizontal software market (such as word processors) contained help systems. More specialized systems, known as expert systems, were developed to produce solutions that required specific domain knowledge of the problem being solved. Expert systems initially produced results that were consistent with those of experts, but they only mimicked the rules the experts outlined. The decisions provided by expert systems include no justification, causing users to doubt the results reported by the system. If the user were dealing with a human expert, he or she could ask for the line of reasoning used to draw the conclusion; that line of reasoning could then be inspected for discrepancies by another expert or verified in some other manner. Software systems need better explanations of how to use them and how they produce results. This will allow users to take advantage of the numerous features being provided and increase their trust in the software product.
By Bruce A. Wooley, September 1998
By Erika Dawn Gernand, May 1998
By Marianne G. Petersen, May 1998
By Jason Hong, May 1998
By Phil Agre, May 1998
By Lynellen D. S. Perry, May 1998
By George Crawford, May 1998
By Jack Wilson, May 1998
By Randolph Chung, Lynellen D. S. Perry, April 1998
By Sharon Lauback, April 1998
By Hiroaki Kitano, Minoru Asada, Itsuki Noda, Hitoshi Matsubara, April 1998
Robotic soccer is a challenging research domain involving multiple agents that need to collaborate in an adversarial environment to achieve specific objectives. This article describes CMUnited, the team of small robotic agents that we developed to enter the RoboCup-97 competition. We designed and built the robotic agents, devised the appropriate vision algorithm, and developed and implemented algorithms for strategic collaboration between the robots in an uncertain and dynamic environment. The robots can organize themselves in formations, hold specific roles, and pursue their goals. In game situations, they have demonstrated their collaborative behaviors on multiple occasions. The robots can also switch roles to maximize the overall performance of the team. We present an overview of the vision processing algorithm which successfully tracks multiple moving objects and predicts trajectories. The paper then focuses on the agents' behaviors ranging from low-level individual behaviors to coordinated, strategic team behaviors. CMUnited won the RoboCup-97 small-robot competition at IJCAI-97 in Nagoya, Japan.
By Manuela Veloso, Peter Stone, Kwun Han, Sorin Achim, April 1998
By Todd M. Schrider, April 1998
The Java language is compiled into a platform-independent bytecode format. Much of the information contained in the original source code remains in the bytecode, making decompilation easy. We will examine how code obfuscation can help protect Java bytecodes.
By Douglas Low, April 1998
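A hedged illustration of the simplest form of such protection, identifier scrambling: the two Java classes below behave identically, but the second approximates what a decompiler recovers after an obfuscator strips the meaningful names. The example is invented for illustration.

// Before obfuscation: names carry the design intent.
class AccountBefore {
    private double balance;
    public void deposit(double amount) {
        if (amount > 0) balance += amount;
    }
}

// After name obfuscation: same bytecode-level behavior, no intent left.
class a {                        // class name scrambled
    private double b;            // field name scrambled
    public void c(double d) {    // method and parameter names scrambled
        if (d > 0) b += d;
    }
}

Because the Java Virtual Machine never needs the original names, this transformation costs nothing at run time while removing much of what makes decompiled code readable.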
By George Crawford, April 1998
By Jack Wilson, April 1998
By Lynellen D. S. Perry, April 1998
By John Cavazos, April 1998
By Vishal Shah, November 1997
By Brian T. Kurotsuchi, November 1997
The Java Native Interface (JNI) comes with the standard Java Development Kit (JDK) from Sun Microsystems. It permits Java programmers to integrate native code (currently C and C++) into their Java applications. This article will focus on how to make use of the JNI and will provide a few examples illustrating the usefulness of this feature. Although a native method system was included with the JDK 1.0 release, this article is concerned with the JDK 1.1 JNI which has several new features, and is much cleaner than the previous release. Also, the examples given will be specific to JDK 1.1 installed on the Solaris Operating System.
By S. Fouzi Husaini, November 1997
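The Java half of a minimal JNI example, following the standard JDK 1.1 pattern (the names are illustrative, and the matching C implementation, written against a javah-generated header, is omitted):

public class HelloNative {
    public native void sayHello();   // declared here, implemented in native code

    static {
        // Loads libhello.so on Solaris/Linux (hello.dll on Windows).
        System.loadLibrary("hello");
    }

    public static void main(String[] args) {
        new HelloNative().sayHello(); // dispatches into the native library
    }
}

The native keyword tells the compiler the body lives elsewhere; the static initializer makes sure the library is loaded before any native method is called.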
By George Crawford, November 1997
By John Cavazos, November 1997
By Michael A. Grasso, Mark R. Nelson, October 1997
By Jinsoo Park, October 1997
By Vineet Kapur, Douglas Troy, James Oris, October 1997
By Wayne Smith, October 1997
By Neal G. Shaw, October 1997
By Susan E. Yager, October 1997
By Hal Berghel, October 1997
By Jack Wilson, October 1997
By Thomas C. Waszak, October 1997
By John Cavazos, October 1997
By Adam Lake, May 1997
By Paul Rademacher, May 1997
Frameless Rendering (FR) is a rendering paradigm that performs stochastic temporal filtering by updating pixels in a random order, based on the most recent available input data, and displaying them to the screen immediately [1]. This is a departure from the frame-based approaches commonly experienced in interactive graphics. A typical interactive graphics session uses a single input state to compute an entire frame, which constrains the state to be known at the time the first pixel's value is computed. Frameless Rendering samples inputs many times during the interval that begins at the start of the first pixel's computation and ends with the last pixel's computation. Thus, Frameless Rendering performs temporal supersampling: it uses more samples over time. This results in an approximation to motion blur, both theoretically and perceptually. This paper explores this motion blur and its relationship to camera open-shutter time, current computer graphics motion-blur implementations, temporally anti-aliased images, and the Human Visual System's (HVS) motion smear quality (see 'quality' footnote) [2]. Finally, we integrate existing research results to conjecture how Frameless Rendering can use knowledge of the Human Visual System's blurred retinal image to direct spatiotemporal sampling. In other words, we suggest importance sampling (see 'sampling' footnote): prioritizing pixels for computation based on their importance to the visual system in discerning what is occurring in an interactive image sequence.
By Ellen J. Scher Zagier, May 1997
This paper covers the techniques of polygonal simplification used to produce levels of detail (LODs). The problem of creating LODs is a complex one: how can simpler versions of a model be created? How can the approximation error be measured? How can the visual degradation be estimated? Can all this be done automatically? After presenting the basic aims and principles of polygonal simplification, we compare recent algorithms and state their various strengths and weaknesses.
By Mike Krus, Patrick Bourdot, Françoise Guisnel, Gullaume Thibault, May 1997
The increasing demands of 3D game realism - in terms of both scene complexity and speed of animation - are placing excessive strain on the current low-level, computationally expensive graphics drawing operations. Despite these routines being highly optimized, specialized, and often being implemented in assembly language or even in hardware, the ever-increasing number of drawing requests for a single frame of animation causes even these systems to become overloaded, degrading the overall performance. To offset these demands and dramatically reduce the load on the graphics subsystem, we present a system that quickly and efficiently finds a large portion of the game world that is not visible to the viewer for each frame of animation, and simply prevents it from being sent to the graphics system. We build this searching mechanism for unseen parts from common and easily implemented graphics algorithms.
By Kenneth E. Hoff, May 1997
By Phil Agre, May 1997
By Matt Cutts, May 1997
By Jack Wilson, May 1997
By John Cavazos, May 1997
By Sofia C. DeFernandez, April 1997
Contemporary computers predominantly employ graphical user interfaces (GUIs), and color is a major component of the GUI. Every man-machine interface is composed of two major parts: the man and the machine [4]. Color interfaces are no different in that they are also based on two parts: the human visual system (HVS) and a color display system. A theoretical examination of these two components establishes a foundation for developing practical guidelines for color interfaces. This paper will briefly examine theoretical aspects of both components and established techniques and tools for the effective use of color in software interface design.
By Peggy Wright, Diane Mosser-Wooley, Bruce Wooley, April 1997
By Ian MacColl, David Carrington, April 1997
By Christopher M. Smith, April 1997
By Michael A. Grasso, Tim Finin, April 1997
Virtual Reality hype is becoming a large part of everyday life. This paper explores the components of actual virtual reality systems, critiquing each in terms of human factors. The hardware and software of visual, aural, and haptic input and feedback are considered. Technical and human factor difficulties are discussed and some potential solutions are offered.
By Lynellen D. S. Perry, Christopher M. Smith, Steven Yang, April 1997
By Jack Wilson, April 1997
By John Cavazos, November 1996
By Bjorn Stabell, Ken Ronny Schouten, November 1996
By Sarah Elizabeth Burcham, November 1996
By Melissa Chaika, November 1996
By Lorrie Faith Cranor, November 1996
By Fabian Ernst, Jeroen Moelands, Seppo Pieterse, November 1996
By G. Bowden Wise, November 1996
By Jack Wilson, November 1996
By Sara Carlstead, November 1996
By Lynellen D. S. Perry, November 1996
By John Cavazos, November 1996
By Frank Klassner, September 1996
By Kentaro Toyama, Drew McDermott, September 1996
By Lynellen D. S. Perry, September 1996
By Christopher O. Jaynes, September 1996
By Joseph Beck, Mia Stern, Erik Haugsjaa, September 1996
Anytime algorithms are algorithms that exchange execution time for quality of results. Since many computational tasks are too complicated to be completed at real-time speeds, anytime algorithms allow systems to intelligently allocate computational time resources in the most effective way, depending on the current environment and the system's goals. This article briefly covers the motivations for creating anytime algorithms, the history of their development, a definition of anytime algorithms, and current research involving them.
By Joshua Grass, September 1996
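A small Java sketch of the idea (an invented example, not from the article): a Monte Carlo estimate of pi whose answer is available at any moment and improves monotonically the longer it is allowed to run.

import java.util.Random;

public class AnytimePi {
    // Runs until the deadline, then returns the best estimate so far.
    public static double estimate(long deadlineMillis) {
        Random rng = new Random();
        long inside = 0, total = 0;
        long stop = System.currentTimeMillis() + deadlineMillis;
        while (System.currentTimeMillis() < stop) {
            double x = rng.nextDouble(), y = rng.nextDouble();
            if (x * x + y * y <= 1.0) inside++;   // point fell inside the quarter circle
            total++;
        }
        return 4.0 * inside / total;              // quality grows with the time budget
    }

    public static void main(String[] args) {
        System.out.println("10 ms budget:  " + estimate(10));
        System.out.println("500 ms budget: " + estimate(500));
    }
}

A system can hand such a procedure whatever time slice its current goals allow and still get a usable, if coarser, answer.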
By G. Bowden Wise, September 1996
By Jack Wilson, September 1996
By Paul Rubel, September 1996
By Lorrie Faith Cranor, September 1996
By Michael Neuman, Diana Moore, April 1996
By Aurobindo Sundaram, April 1996
By Jason Evans, Deborah Frincke, April 1996
By Lorrie Faith Cranor, April 1996
The explosive growth of networked and internetworked computer systems during the past decade has brought about a need for increased protection mechanisms. This paper discusses three authentication protocols that employ methods providing effective user authentication. The first two protocols have been previously discussed in the literature; the third draws from the first two and others to produce an authentication scheme that provides both mutual authentication and secure key distribution, is easy to use, is compatible with present operating systems, is transparent across systems, and provides password file protection.
By Charles Cavaiani, Jim Alves-Foss, April 1996
By G. Bowden Wise, April 1996
By Jack Wilson, April 1996
By Lorrie Faith Cranor, April 1996
By George E. Hatoun, Brad Templeton, February 1996
By Dan Ghica, February 1996
By C. Fidge, February 1996
A method of illustrating program structure by showing how sections depend on each other is presented. This suggests an intuitive metric for program partitioning, which is developed with supporting theory.
By Mark Ray, February 1996
By Navid Sabbaghi, February 1996
By G. Bowden Wise, February 1996
By Lorrie Faith Cranor, February 1996
By Scott Ramsay MacDonald, February 1996
By Jack Wilson, February 1996
By Sara M. Carlstead, February 1996
By Lorrie Faith Cranor, February 1996
By Lorrie Faith Cranor, November 1995
By Jeff Robbins, November 1995
By Matt Rosenberg, November 1995
By Melissa Chaika, November 1995
By Sara M. Carlstead, November 1995
By Rob Jackson, November 1995
By Scott Ramsey MacDonald, November 1995
By G. Bowden Wise, November 1995
By Jack Wilson, November 1995
By Sara M. Carlstead, November 1995
By Lorrie Faith Cranor, November 1995
By Lorrie Faith Cranor, November 1995
By Mark Allman, September 1995
By Scott Ruthfield, September 1995
By Ben W. Brumfield, September 1995
By Jay A. Kreibich, September 1995
By Darren Bolding, September 1995
By Sarah Elizabeth Burcham, September 1995
By Jeremy Buhler, September 1995
By G. Bowden Wise, September 1995
By Lorrie Faith Cranor, Adam Lake, September 1995
By Saveen Reddy, September 1995
By Sara M. Carlstead, September 1995
By Lorrie Faith Cranor, September 1995
By Ronald B. Krisko, May 1995
By Scott Ramsey MacDonald, May 1995
By Saul Jimenez, May 1995
By Lorrie Faith Cranor, May 1995
By Adam Lake, May 1995
By Sara M. Carlstead, May 1995
By Saveen Reddy, May 1995
By G. Bowden Wise, May 1995
By Sara M. Carlstead, May 1995
By Ronald B. Krisko, May 1995
By G. Bowden Wise, February 1995
By Lorrie Faith Cranor, February 1995
By Ronald B. Krisko, February 1995
By Saveen Reddy, February 1995
By Lorrie Faith Cranor, February 1995
By Saveen Reddy, December 1994
By Lorrie Cranor, Ajay Apte, December 1994
By Shriram Krishnamurthi, December 1994
By G. Bowden Wise, December 1994
By Bradley M. Kuhn, David W. Binkley, December 1994
By Lorrie Faith Cranor, December 1994
By Terry White, December 1994
By Saveen Reddy, September 1994
By Jason Yanowitz, September 1994
By Saveen Reddy, G. Bowden Wise, September 1994
By Saveen Reddy, September 1994
By Terry White, September 1994
By Saveen Reddy, September 1994