October is Breast Cancer Awareness Month. Cancer is classified as a genetic disease caused by abnormal cell division that destroys body tissue. Wait! Cells? Body tissue? Disease? So now you might be wondering: what does any of this have to do with computers?
In fact, cancer research has been at the heart of the life sciences for the past few decades. Since genetics plays an important role in most cancers, computational methods are crucial for understanding how the disease develops, as well as for predicting the outcomes of clinical trials for treatment. That’s where computer science comes into action.
Before we define the computational problem, let’s review some biology from high school and learn some facts about cancer.
The human body consists of trillions of cells. Although every cell in your body carries the exact same DNA, each cell carries out its own function. DNA is a long sequence of nucleotides stored inside the cell nucleus.
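Before getting to the computational problem, it helps to see how this biological picture maps onto data structures. A minimal sketch: DNA can be modeled as a plain string over the four-letter nucleotide alphabet {A, C, G, T}. The sequence below is invented purely for illustration.

```python
from collections import Counter

# A made-up DNA fragment: a string over the alphabet {A, C, G, T}.
dna = "ATGCGTACGTTAGC"

# Every cell carries the same string; genes are substrings of it.
# Counting nucleotide frequencies is one of the simplest questions
# we can ask of a sequence.
counts = Counter(dna)
print(counts["A"], counts["C"], counts["G"], counts["T"])  # prints: 3 3 4 4
```

Treating DNA as a string is exactly what lets classic string algorithms (searching, matching, alignment) apply to genomics.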
To introduce you to exascale computing, as well as its challenges, we interviewed the distinguished Professor Jack Dongarra (University of Tennessee), an internationally renowned expert in high-performance computing and the leading scientist behind the TOP500 (http://www.top500.org/), a list that ranks supercomputers by performance.
When a scientific experiment achieves the expected result, researchers hurry to draft a manuscript, submit it, and cross their fingers for acceptance. When the paper gets accepted for publication, they are happy campers! Not much later, however, the researchers discover that it was a fool’s paradise: their work never gets cited by peers, often simply because others cannot reproduce their scientific experiment, i.e., they cannot compare it to their own experiments. There are a few reasons that block research reproducibility. In this post, I will preview some of those that frequently appear in the field of computational science.
In software engineering, the “big data” catchphrase refers to heterogeneous, large-scale data that can stem from every phase of the software development cycle. Such data include: source code, software bugs and errors, system logs, commits, issues from bug-tracking systems, discussion threads from consulting sites (e.g. stackoverflow.com), emails from mailing lists, developers’ demographic data and characteristics, and user requirements and reviews. Software engineering can benefit from all of these data in many ways, but handling them poses several challenges.
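To make the idea of mining such data concrete, here is a hedged sketch (not taken from the post): counting commits per author from simplified `git log`-style output. The log lines and author names below are invented for illustration.

```python
from collections import Counter

# Invented, simplified "git log" output: one "author:" line per commit.
log_lines = [
    "author: alice",
    "author: bob",
    "author: alice",
]

# Extract the author name from each "author:" line and tally commits.
commits_per_author = Counter(
    line.split(": ", 1)[1]
    for line in log_lines
    if line.startswith("author:")
)
print(commits_per_author.most_common())  # prints: [('alice', 2), ('bob', 1)]
```

Even this toy example hints at the challenges the post alludes to: real repository data is far larger, noisier (inconsistent author identities, merge commits), and spread across many heterogeneous sources.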