Sunday, December 15, 2013

Scientific Computing


Computational science, or scientific computing, uses mathematical models and quantitative analysis to solve scientific problems. The approach is to gain knowledge by building mathematical models, implementing them on computers, and analyzing the results.
Scientists at ETH Zurich, collaborating with IBM Research, the Technical University of Munich, and Lawrence Livermore National Laboratory, have set a new record in fluid dynamics supercomputing using one of the fastest computers in the world, the IBM Sequoia BlueGene/Q. The team employed 13 trillion cells and reached a sustained performance of 14.4 petaflops. That is 73% of the theoretical peak! A 200-fold improvement over previous research, it paves the way for simulating cloud cavitation. Cloud cavitation happens when vapor cavities form in a liquid due to pressure changes; damaging shockwaves can be created when the bubbles implode.
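
To make the "trillions of cells" idea concrete, here is a toy sketch of grid-based simulation in Python: the domain is divided into cells, and each time step updates every cell from its neighbors. This is a simple 1D diffusion model with made-up parameters, nothing like the actual compressible multiphase solver used in the record run.

```python
# Toy illustration of a grid of cells: each interior cell is updated
# from its two neighbors with an explicit finite-difference scheme.
# All parameters here are illustrative.

n_cells = 100
dx, dt, alpha = 1.0, 0.1, 1.0        # grid spacing, time step, diffusivity

u = [0.0] * n_cells
u[n_cells // 2] = 1.0                # an initial spike in the middle

for step in range(500):
    new_u = u[:]
    for i in range(1, n_cells - 1):
        # diffusion: each cell moves toward the average of its neighbors
        new_u[i] = u[i] + alpha * dt / dx**2 * (u[i-1] - 2*u[i] + u[i+1])
    u = new_u

print(max(u))  # the spike spreads out and flattens over time
```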

The simulation helped resolve many problems associated with collapsing bubbles, such as the shattering of kidney stones or improving the design of high-pressure fuel injectors. Another area this simulation can help improve is cancer treatment, specifically destroying tumorous cells or delivering drugs to a very precise location. It is easy to see how impactful this area of computer science can be for many aspects of our future.

Sunday, December 8, 2013

Computer Graphics

The term computer graphics can be described as “almost everything on computers that is not text or sound.” It refers to several different areas of computer science, all with an underlying focus on the creation, representation, and manipulation of visual data. There are many kinds of images, including 2D pixel art, vector graphics, 3D models, and computer animation.
A term closely tied to computer graphics is rendering: when an image is drawn on a computer, it is said to be rendered. Since the term computer graphics was first coined by William Fetter in 1960, the field has come a long way. Starting out as interactive interfaces on appliances, it quickly caught the attention of the video game and movie industries. Over the years, renderings became more and more lifelike. The texture and fluidity of water have only recently been mastered. Human skin seemed a tough task to simulate for a while, but I have noticed huge improvements in recent video games such as Tomb Raider.
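
Rendering, in the most literal sense, just means computing a color for every pixel and writing out the result. Here is a minimal, purely illustrative sketch in Python that renders a color gradient to the text-based PPM format, which most image viewers can open.

```python
# Render a simple gradient: compute a color per pixel, write a PPM file.

width, height = 256, 128

with open("gradient.ppm", "w") as f:
    f.write(f"P3\n{width} {height}\n255\n")    # PPM (P3) header
    for y in range(height):
        for x in range(width):
            r = x * 255 // (width - 1)          # red ramps left to right
            g = y * 255 // (height - 1)         # green ramps top to bottom
            f.write(f"{r} {g} 64\n")            # constant blue component
```
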
An article on the most cutting-edge research in computer graphics showed me just where this area of computer science is headed. The advancements made in rendering are unbelievable. Scientists use many different tools to approach near-perfect realism; density functions and thermal imaging are just two of them. Check out this video of multiple SIGGRAPH Asia papers to see just how real these simulations can be!

Sunday, November 24, 2013

Artificial Intelligence


The study of artificial intelligence, or AI, is a branch of computer science that deals with the simulation of intelligent behavior in computers. Some tasks require a robot; others, just a program. Current AI efforts are focused on tasks that are easy for people yet difficult for a computer, such as vision, understanding and speaking natural language, and manipulating objects. Some useful AI systems that we can all recognize include Google Translate, recommendation systems on Amazon or YouTube, and ATMs. Strong AI has even higher ambitions, attempting to build systems which are equal to people in ALL respects, presumably including consciousness. The human brain is so very complex, though. It builds itself by experiencing the world and learning, something that has not yet been captured by engineering.
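
To give a flavor of how a recommendation system works, here is a minimal sketch: score users by similarity to one another, so that items one user liked become candidate recommendations for a similar user. This uses cosine similarity over toy rating vectors; real systems at Amazon or YouTube are enormously more complex, and all data here is made up.

```python
# Toy collaborative filtering: users with similar rating vectors
# probably like similar items.
import math

def cosine(a, b):
    """Cosine similarity between two rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# each user's ratings of the same five items (0 = unrated)
alice = [5, 4, 0, 0, 1]
bob   = [4, 5, 1, 0, 1]
carol = [0, 1, 5, 5, 0]

# Bob's tastes resemble Alice's far more than Carol's do, so items
# Bob liked are better recommendations for Alice.
print(cosine(alice, bob))    # high similarity
print(cosine(alice, carol))  # low similarity
```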

The progress that has been made in AI since the beginning of the 21st century is amazing. Self-driving cars, weather-predicting software, investment software, smart security systems, and robotic assistants are all cutting-edge technology. I am excited to see what the future holds for artificial intelligence. One particular article, focusing on “creative machines”, caught my attention. I had always thought of robots as data analyzers, making predictions and performing accordingly. Then I read about programs that can compose original music and create original paintings. There is even talk of robots writing original novels, and of how that would affect copyright law. The thought of a robot relieving us of the need to be creative feels quite strange and intriguing. The advancement is impressive, but it seems to focus only on the product. Creativity is not only appreciated by the audience; it begins with the artist creating something from emotion. The creative process can even be a kind of therapy for the creator. But a computer can’t feel. It is simply mimicking creativity.

Sunday, November 10, 2013

File Sharing

File sharing is providing access to digitally stored information. Sharing files can be achieved in many ways, including manual sharing using removable media, centralized servers, hyperlinked documents, and peer-to-peer networking. For this blog I would like to focus on peer-to-peer, or P2P, sharing.

This type of sharing allows users to access media such as books, music, movies, and more, using software that searches for other computers offering shared access. An early P2P network that you may remember was Napster. Many more programs popped up following Napster’s popularity, including Bearshare and Winamp, up to the current favorite, BitTorrent. One core idea behind BitTorrent is sketched below.
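
BitTorrent splits a file into fixed-size pieces and hashes each one, so that peers can verify chunks downloaded from untrusted strangers. Here is a minimal Python sketch of just that hashing step, not the protocol itself; the file name and piece size are illustrative.

```python
# Hash a file piece by piece, as BitTorrent-style sharing does, so
# each downloaded chunk can be verified independently.
import hashlib

PIECE_SIZE = 256 * 1024  # 256 KiB pieces, a common choice

def piece_hashes(path):
    """Return the SHA-1 hash of each fixed-size piece of the file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            piece = f.read(PIECE_SIZE)
            if not piece:
                break
            hashes.append(hashlib.sha1(piece).hexdigest())
    return hashes

# a downloaded piece is accepted only if its hash matches the expected one:
# expected = piece_hashes("some_file.iso")   # hypothetical file
```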
Increases in bandwidth and in the capabilities of residential computers are a couple of the reasons why P2P networks saw such widespread adoption. The fact that this type of sharing is not inherently illegal also boosted its popularity. For the most part, this type of sharing is completely legitimate. Legal issues only arise when the shared files contain copyrighted material. In the music and film industries, copyright infringement has been a controversial and unsettled argument. Most studies concluding that file sharing has a negative effect on record sales are unofficial.

Sunday, November 3, 2013

Data Structures

Data structures are ways of organizing related pieces of information and storing them so that they may be retrieved and used efficiently. They also make large amounts of data manageable, as in databases or internet indexing services. There are many different kinds of data structures, and each suits specific needs.
An array is a type of data structure that stores many elements in a specific order, each identified by at least one array index or key. The elements are stored in such a way that the position of each one can be computed from its index tuple by a mathematical formula, as the sketch below shows. Arrays can be expandable or fixed in length. They are among the oldest and most important data structures, appearing in almost every program, and they are even used to implement other data structures such as lists and strings.
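
Here is a minimal sketch of that "computed by formula" idea in Python: an element's memory position follows directly from its index, which is why array lookups take constant time. The base address and element size are made-up numbers for illustration.

```python
# Array indexing as arithmetic: no searching, just a formula.

base_address = 1000   # hypothetical start of the array in memory
element_size = 4      # e.g., 4 bytes per 32-bit integer

def element_address(index):
    """Address of arr[index], computed directly from the index."""
    return base_address + index * element_size

print(element_address(0))  # 1000
print(element_address(5))  # 1020
```
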
A record is another type of data structure, and a building block for many others. Also called structs or tuples, a record is a value that contains other values. Its fields are typically indexed by name, and they are usually fixed in number and sequence, which distinguishes records from arrays. Records can exist in any storage medium, and files are often organized as arrays of records.
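
A minimal sketch of a record in Python, using a dataclass: the fields are fixed in number and accessed by name rather than by position. The Point type here is purely illustrative.

```python
# A record: a fixed set of named fields bundled into one value.
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

p = Point(x=3.0, y=4.0)
print(p.x, p.y)  # fields accessed by name, not by numeric index
```
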
A set is yet another type of data structure, an abstract one. Abstract, in this context, means an aggregate, or a collection of data. A set can store specific values, without order and with no repeated values. Some sets are static while others are dynamic: static, or frozen, sets do not change after construction, while dynamic, or mutable, sets allow elements to be inserted or deleted after construction.
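
Python happens to offer both flavors directly, which makes for a short sketch: set is the dynamic (mutable) variant, and frozenset is the static one.

```python
# Sets: unordered collections with no duplicate values.

colors = {"red", "green", "red", "blue"}
print(colors)            # duplicates collapse; only three elements remain

colors.add("yellow")     # dynamic sets allow insertion...
colors.discard("green")  # ...and deletion after construction

frozen = frozenset(colors)   # a static set: no add or discard allowed
```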

Monday, October 28, 2013

Hacking



A hacker is someone who uses a computer to gain unauthorized access to a network. But why do hackers hack? Some may do it for the challenge and thrill of gaining access to government computers, while others may hack to obtain certain information. Others still have malicious intent, using the access they gain to damage other computers. Sounds pretty sneaky, right? I guess it all depends on which side you are on. http://ethics.csc.ncsu.edu/abuse/hacking/study.php
The infamous hacker group Anonymous is one such organization whose side I want to be on. They are an international network of activists and "hacktivists". For companies and organizations that are targeted, Anonymous must be a terrifying threat. But for citizens who have been jaded or treated unjustly by these corporations, Anonymous can be seen as a group of heroes. They have retaliated against anti-digital-piracy campaigns, tracked down internet predators, and threatened Mexican drug cartels. Beyond that, they have also attacked the Pentagon, threatened to shut down Facebook, and waged war on Scientology.
Anonymous has no official leadership; it is rather a group of people working together to accomplish various goals. One characteristic of the group that I admire is that their motivation always comes from an unrelenting moral stance on issues and rights. I feel as if these hackers are looking out for the greater good and give voices to those who are unable to be heard. Even if action is not taken, sometimes the threat is all that is needed. And their threat is fairly intimidating.

Sunday, October 6, 2013

Agile

What is Agile?
Agile development is an umbrella term for methods based on iterative and incremental project management, regularly used in software development. Agile methodology is an alternative to traditional project management methods. With a coordinated and organized team, solutions can evolve through collaboration. Agile helps teams respond to unpredictability with each successive "sprint", encouraging adaptive planning and rapid, flexible responses to change. Its conceptual framework makes the interactions throughout the development cycle visible in advance.

How did Agile begin?
In 1970, Dr. Winston Royce published a very influential article, Managing the Development of Large Software Systems. In his paper, he criticized sequential development and asserted that software development should not be conducted as if on an assembly line. He introduced several project management models, including what we now know as agile. At a meeting of software developers in Utah in 2001, a group published The Manifesto for Agile Software Development to define this specific approach.
Martin Fowler is widely recognized as one of the key founders of agile methods.


Advantages and Disadvantages
Advantages:
  • Customers are satisfied by rapid, continuous delivery.
  • Constant interaction between customers, developers, and testers is encouraged, while keeping the process and tools understated.
  • Daily, face-to-face conversation is treated as the best form of communication.
  • Continuous attention is paid to technical excellence and good design.
  • Teams can adapt to a changing environment.
  • Even late changes in requirements are welcomed.

Disadvantages:
  • For some software deliverables, especially large ones, it is difficult to assess the effort required at the beginning of the software development life cycle.
  • There is a lack of emphasis on necessary design and documentation.
  • The project can easily become unfocused if the customer representative is not clear about the final vision.
  • Usually only experienced programmers are capable of making the kinds of decisions required during the development process, so there is little room for newbie programmers unless they are paired with experienced ones.

http://istqbexamcertification.com/what-is-agile-model-advantages-disadvantages-and-when-to-use-it/