Elasticity in the cloud
By David Chiu, March 2010
By Ryan K. L. Ko, March 2010
Computers continue to get faster exponentially, but the computational demands of science are growing even faster. Extreme requirements arise in at least three areas.
By David P. Anderson, March 2010
Despite its promise, cloud computing innovation has been driven almost exclusively by a few industry leaders, such as Google, Amazon, Yahoo!, Microsoft, and IBM. The involvement of the wider research community, in both academia and industrial labs, has so far been patchy, without a clear agenda. In our opinion, this limited participation stems from the prevalent view that clouds are mostly an engineering and business-oriented phenomenon, based on stitching together existing technologies and tools.
By Ymir Vigfusson, Gregory Chockler, March 2010
In recent years, empirical science has been evolving from physical experimentation to computation-based research. In astronomy, researchers seldom spend time at a telescope, but instead access the large number of image databases that are created and curated by the community [42]. In bioinformatics, data repositories hosted by entities such as the National Institutes of Health [29] provide the data gathered by Genome-Wide Association Studies and enable researchers to link particular genotypes to a variety of diseases.
By Gideon Juve, Ewa Deelman, March 2010
By Sumit Narayan, Chris Heiden, March 2010
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service-provider interaction. With this pay-as-you-go model of computing, cloud solutions are seen as having the potential both to dramatically reduce costs and to accelerate application development.
By Ramaswamy Chandramouli, Peter Mell, March 2010
At the turn of the 20th century, companies stopped generating their own power and plugged into the electricity grid. In his now famous book The Big Switch, Nick Carr analogizes those events of a hundred years ago to the tectonic shift taking place in the technology industry today.
By Guy Rosen, March 2010
By Daniel W. Goldberg, December 2009
The social Web is a set of ties that enable people to socialize online, a phenomenon that has existed since the early days of the Internet in environments like IRC, MUDs, and Usenet (e.g. 4, 12). People used these media in much the same way they do now: to communicate with existing friends and to meet new ones. The fundamental difference was the scale, scope, and diversity of participation.
By Sarita Yardi, December 2009
"Know thyself". Carved in stone in front of the Temple of Apollo at Delphi, that was the first thing people saw when they visited the Oracle to find answers. The benefits of knowing oneself are many. It fosters insight, increases self-control, and promotes positive behaviors such as exercise and energy conservation.
By Ian Li, Anind Dey, Jodi Forlizzi, December 2009
Research related to online social networks has addressed a number of important problems related to the storage, retrieval, and management of social network data. However, privacy concerns stemming from the use of social networks, or the dissemination of social network data, have largely been ignored. And with more than 250 million active Facebook (http://facebook.com) users, nearly half of whom log in at least once per day [5], these concerns can't remain unaddressed for long.
By Grigorios Loukides, Aris Gkoulalas-Divanis, December 2009
Searching for information online has become an integral part of our everyday lives. However, sometimes we don't know the specific search terms to use, while other times, the specific information we're seeking hasn't been recorded online yet.
By Gary Hsieh, December 2009
As a computer science student, you stand a significant chance of ending up in software development after graduation. Whether your career path takes you into industry or academia, you're likely to have some kind of interaction with software development companies or organizations, if only to get the most out of a project or collaboration.
By Michael DiBernardo, September 2009
While touchscreens allow extensive programmability and have become ubiquitous in today's gadgetry, such configurations lack the tactile sensations and feedback that physical buttons provide. As a result, these devices require more attention to use than their button-enabled counterparts. Still, the displays provide the ultimate interface flexibility and thus afford a much larger design space to application developers.
By Chris Harrison, Scott Hudson, September 2009
Virtual machine technology, or virtualization, is gaining momentum in the information technology community. While virtual machines are not a new concept, recent advances in hardware and software have brought virtualization to the forefront of IT management. Stability, cost savings, and manageability are among the reasons for its recent rise. Virtual machine solutions can be classified as hardware-level, software-level, or operating system/container-based. From its inception on the mainframe to distributed servers on x86, the virtual machine has matured and will play an increasing role in systems management.
By Jeff Daniels, September 2009
By Justin Solomon, June 2009
By Anna Ritchie, June 2009
By Sumit Narayan, June 2009
Computers play an integral part in designing, modelling, optimising, and managing business processes within and across companies. While Business Process Management (BPM), Workflow Management (WfM), and Business Process Reengineering (BPR) have been IT-related disciplines for about three decades, there is still a lack of publications clarifying the definitions and scope of basic BPM terminology: business process, BPM versus WfM, workflow, BPR, and so on. This myriad of similar-sounding terms can be overwhelming for computer scientists and computer science students who wish to venture into this area of research. This guide addresses that gap by providing a high-level overview of the key concepts, rationale, features, and developments of BPM.
By Ryan K. L. Ko, June 2009
Courier problems are a recently proposed family of combinatorial optimization problems inspired by novel requirements in railway wagon scheduling. They concern the scheduling strategies of mobile couriers, each initially residing at a fixed location, that are assigned commodity transfers between pairs of locations; the scenario may be static or dynamic. The goal is to optimize the couriers' movement subject to constraints on the traversed path or its associated cost. We discuss several varieties of courier problems formalized on graphs and address potential methods for their solution.
By Malay Bhattacharyya, June 2009
By Steve Clough, March 2009
By Daniel W. Goldberg, March 2009
This article describes a technique for visualizing query results, representing purchase orders placed on Amazon.com, along a traditional 2-D scatter plot and a space-filling spiral. We integrate 3-D objects that vary their spatial placement, color, and texture properties into a visualization algorithm that represents important aspects of a purchase order, drawing on experimental results from human vision, computer graphics, and psychology. The resulting visual abstractions let viewers rapidly and effectively explore and analyze the underlying purchase-order data.
By Amit Prakash Sawant, Christopher G. Healey, Dongfeng Chen, Rada Chirkova, March 2009
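To make the space-filling spiral idea concrete, here is a minimal sketch, not the authors' implementation: it places synthetic records along an Archimedean spiral and maps two hypothetical attributes (order total, priority) to marker size and color.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical purchase-order attributes (illustrative stand-ins only).
rng = np.random.default_rng(0)
totals = rng.uniform(10, 500, size=200)    # mapped to marker size
priorities = rng.uniform(0, 1, size=200)   # mapped to color

# Archimedean spiral r = a * theta: each record sits a bit further
# along the spiral, filling the plane outward from the center.
theta = np.linspace(0.5, 12 * np.pi, len(totals))
r = 0.5 * theta
x, y = r * np.cos(theta), r * np.sin(theta)

plt.scatter(x, y, s=totals / 5, c=priorities, cmap="viridis")
plt.colorbar(label="priority (hypothetical attribute)")
plt.axis("equal")
plt.title("Records placed along a space-filling spiral")
plt.show()
```

The spiral keeps the ordering of results (e.g., by query rank or date) while using screen space far more densely than a single row of points would.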
By Caio Camargo, March 2009
By William Ella, December 2008
By Joonghoon Lee, December 2008
By Cara Cocking, December 2008
By David Chiu, December 2008
By Salik Syed, September 2008
The visual appearance of volumes of water particles, such as clouds, waterfalls, and fog, depends both on microscopic interactions between light rays and individual droplets of water and on macroscopic interactions between multiple droplets and the paths of light rays. This paper presents a model that builds upon a typical single-scattering volume renderer to correctly account for these effects. To accurately simulate the visual appearance of a surface or a volume of particles in a computer-generated image, the properties of the material or particle must be specified using a bidirectional reflectance distribution function (BRDF), which describes how light reflects off a material, and a bidirectional transmittance distribution function (BTDF), which describes how light refracts into it. This paper describes an optimized BRDF and BTDF for volumes of water droplets, which takes droplet geometry into account to produce well-known effects such as rainbows and halos. It also describes how a multiple-scattering path-tracing volume integrator can be used to more accurately simulate macroscopic light transport through a volume of water, creating a more "cloudlike" appearance than a single-scattering volume integrator. This paper focuses on replicating the visual appearance of volumes of water particles; although it makes use of physical models, the techniques presented are not intended to be physically accurate.
By James Hegarty, September 2008
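For readers unfamiliar with single-scattering volume rendering, the sketch below shows the general shape of such an integrator. It is not the paper's droplet model: it substitutes the Henyey-Greenstein phase function, a standard stand-in for forward-scattering particles, and all parameter values are illustrative.

```python
import math

def henyey_greenstein(cos_theta, g=0.85):
    """Standard HG phase function; g near 1 gives the strong
    forward scattering typical of water droplets."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)

def single_scatter(density, sigma_t=1.0, step=0.1, cos_sun=0.9):
    """Toy single-scattering ray march through a 1-D density profile:
    at each step, accumulate sunlight scattered toward the camera,
    attenuated by the transmittance accumulated so far."""
    transmittance, radiance = 1.0, 0.0
    for rho in density:
        sigma = sigma_t * rho
        radiance += transmittance * sigma * henyey_greenstein(cos_sun) * step
        transmittance *= math.exp(-sigma * step)
    return radiance

print(single_scatter([0.2, 0.5, 0.9, 0.5, 0.2]))
```

The paper's contribution replaces the generic phase function with a droplet-geometry-aware BRDF/BTDF (producing rainbows and halos) and extends the march to multiple scattering.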
By Ed DeHart, September 2008
By Craig Pfeifer, June 2008
By Shahriar Manzoor, June 2008
By Cara Cocking, March 2008
By Leslie Sandoval, March 2008
By Sergio Sayago, Patricia Santos, Maite Gonzalez, Míriam Arenas, Laura López, December 2007
By Rachel Gollub, December 2007
This project visualizes a scientific dataset containing two-dimensional flow data from a simulated supernova collapse, provided by astrophysics researchers. We began by designing visualizations as hand drawings representing the flow data, without regard to the implementation constraints of our designs, and then implemented a few of them. We used an assortment of simple geometric graphical objects, called glyphs, such as dots, lines, arrows, and triangles, to represent the flow at each sample point, and incorporated transparency into our visualizations. We identified two important goals for the project: (1) design different types of graphical glyphs that support flexibility in their placement and in their ability to represent multidimensional data elements, and (2) build an effective visualization technique that uses glyphs to represent the two-dimensional flow field.
By Amit Prakash Sawant, Christopher G. Healey, December 2007
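As a minimal sketch of the glyph idea described above, assuming a synthetic vortex in place of the supernova data: arrow glyphs encode position, flow direction, and a third attribute (speed) as color, with transparency as in the abstract.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for the supernova flow field: a 2-D vortex.
y, x = np.mgrid[-2:2:20j, -2:2:20j]
u, v = -y, x                 # velocity components at each sample point
speed = np.hypot(u, v)       # extra data dimension carried by the glyphs

# Arrow glyphs: position = sample point, direction = flow,
# color = speed; alpha adds the transparency mentioned above.
plt.quiver(x, y, u, v, speed, cmap="plasma", alpha=0.8)
plt.colorbar(label="speed")
plt.title("Arrow glyphs over a 2-D flow field")
plt.show()
```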
Fans of PC role-playing games need no introduction to BioWare, the Edmonton, Alberta-based developer of Baldur's Gate, Neverwinter Nights, and Jade Empire, among others. The company recently opened a studio in Austin, Texas to develop a massively multiplayer online role-playing game (MMORPG, or simply MMO) for an unannounced intellectual property. Ben Earhart, client technology lead on the new project, took a few hours out of his busy schedule to discuss with Crossroads the future of real-time rendering: 3-D graphics that render fast enough to respond to user input, such as those required for video games.
By James Stewart, December 2007
By Gregory M. Zaverucha, December 2007
By Daniel Alex Finkelstein, December 2007
The physiology of how the human brain recalls memories is not well understood. Neural networks have been used in an attempt to model this process.
Two types of network, auto-associative and hetero-associative, have been used in several models of temporal sequence memory for simple sequences of randomly generated and of structured patterns. Previous work has shown that a model with coupled auto- and hetero-associative continuous attractor networks can robustly recall learned simple sequences. In this paper, we compare Hebbian learning and pseudo-inverse learning in a model for recalling temporal sequences in terms of their storage capacities. The pseudo-inverse learning method is shown to have a much higher storage capacity, making the new network model 700% more efficient by reducing calculations.
By Kate Patterson, December 2007
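The capacity gap between the two learning rules can be seen in a much simpler setting than the paper's coupled continuous attractor model. The NumPy sketch below, an illustration rather than the authors' method, stores bipolar patterns in an auto-associative network with Hebbian (outer-product) weights versus pseudo-inverse weights and checks one-step recall; above the classic Hebbian capacity limit (roughly 0.14 patterns per neuron), only the pseudo-inverse rule recalls perfectly.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 40                              # neurons, stored patterns
X = rng.choice([-1.0, 1.0], size=(n, p))    # random bipolar patterns

# Hebbian (outer-product) weights vs. pseudo-inverse weights.
W_hebb = (X @ X.T) / n
W_pinv = X @ np.linalg.pinv(X)   # projects onto the span of the patterns

def recall_accuracy(W):
    # One synchronous update applied to the stored patterns themselves;
    # a perfect auto-associative memory returns them unchanged.
    Y = np.sign(W @ X)
    return (Y == X).mean()

print("Hebbian recall:       ", recall_accuracy(W_hebb))
print("Pseudo-inverse recall:", recall_accuracy(W_pinv))
```

With p/n = 0.4, well past the Hebbian limit, the Hebbian network makes errors while the pseudo-inverse network's stored patterns remain exact fixed points.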
By Saman Amirpour Amraii, December 2007
By Justin Solomon, September 2007
By Amit Chourasia, March 2007
The advent of computers with high processing power has led to the generation of large, multidimensional collections of data. Visualization lends itself well to the challenge of exploring and analyzing these information spaces by harnessing the strengths of the human visual system. Most visualization techniques are based on the assumption that the display device has sufficient resolution, and that our visual acuity is adequate for completing the analysis tasks. However, this may not be true, particularly for specialized display devices (e.g., PDAs or large-format projection walls).
In this article, we propose to: (1) determine the amount of information a particular display environment can encode; (2) design visualizations that maximize the information they represent relative to this upper limit; and (3) dynamically update a visualization when the display environment changes, to maintain high levels of information content. To our knowledge, no existing visualization system adds or removes information in this way based on perceptual guidelines. There are, however, systems that increase or decrease the amount of information shown based on level-of-detail or zooming rules. For example, semantic zooming tags objects with "details" and adds or removes them as the user zooms in and out. Furnas's original fisheye lens system [9] used semantic details to determine how much zoom was necessary to include certain details: while zooming in for detail, you see not only a more detailed graphic representation but also more textual detail (e.g., more street names on the zoomed-in portion of a map). Level-of-detail hierarchies have also been used in computer graphics to reduce geometric complexity, replacing full-resolution models with low-detail models where the resulting error cannot easily be noticed. Our approach is motivated by all of these ideas, but our key contribution is that we use human perceptual constraints to define when to add or remove information.
By Amit Prakash Sawant, Christopher G. Healey, March 2007
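To illustrate the semantic-zoom baseline the article contrasts itself against (not its perception-driven contribution), here is a toy rule in which each map label carries a hypothetical importance score and the zoom level sets the cutoff for what is shown:

```python
# Illustrative labels with made-up importance scores.
labels = [("I-40", 0.9), ("Main St", 0.6), ("Elm Ct", 0.2)]

def visible_labels(zoom, budget_per_zoom=0.3):
    # Higher zoom -> lower importance threshold -> more detail shown,
    # mirroring how street names appear as you zoom into a map.
    threshold = max(0.0, 1.0 - budget_per_zoom * zoom)
    return [name for name, importance in labels if importance >= threshold]

for zoom in (1, 2, 3):
    print(f"zoom {zoom}: {visible_labels(zoom)}")
```

The authors' approach would replace the fixed threshold schedule with limits derived from what the display and the human visual system can actually resolve.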
This paper presents the core knowledge required to develop 2D games in Java properly. We describe the common pitfalls that can easily degrade graphics performance and show how we achieved impressive frame rates when implementing Minueto, a game development framework.
By Alexandre Denault, Jörg Kienzle, March 2007
By Deian Stefan, March 2007
By Paula Bach, Chris Jordon, December 2006