Tuesday, March 29, 2011

Technology Used in Japan’s Earthquake and Tsunami

On March 11, 2011, the northeastern part of Japan was devastated by an 8.9-magnitude earthquake. The initial quake was followed by approximately fifty aftershocks and a tsunami. The earthquake and tsunami damaged the Fukushima Daiichi nuclear power plant. After explosions at three of the plant's reactor buildings, citizens within a twenty-five-mile radius were advised to stay indoors to prevent radiation exposure. Officials say over 8,600 people have died and 12,900 people are still missing.

Japan has the world’s most advanced earthquake early-warning system. An earthquake produces two main types of seismic waves. The first is the P-wave, which arrives first and does minimal damage. The second is the S-wave, which arrives later and does more extensive damage. The difference between the arrival times of the P-wave and the S-wave can be used to estimate the distance to the epicenter. Seismographs can issue a warning up to about two minutes before the strong shaking begins. That is just enough time for people to take cover, slow down high-speed trains, shut off gas lines, exit elevators, pull over to the side of the road, and for doctors to stop performing surgery.
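To make that arithmetic concrete, here is a rough sketch in Python of how the S-minus-P delay translates into distance. The wave speeds are assumed typical crustal values, not figures from Japan's actual warning system.

```python
# Rough sketch: estimating distance to an earthquake's epicenter from the
# delay between P-wave and S-wave arrivals. The velocities below are typical
# crustal values and are assumptions, not parameters of Japan's system.

P_WAVE_SPEED_KM_S = 6.0   # assumed average P-wave speed
S_WAVE_SPEED_KM_S = 3.5   # assumed average S-wave speed

def epicenter_distance_km(s_minus_p_seconds):
    """Estimate distance (km) from the S-P arrival-time difference (seconds)."""
    factor = (P_WAVE_SPEED_KM_S * S_WAVE_SPEED_KM_S) / (P_WAVE_SPEED_KM_S - S_WAVE_SPEED_KM_S)
    return s_minus_p_seconds * factor

# Example: if the S-wave arrives 15 seconds after the P-wave,
# the epicenter is roughly 126 km away.
print(round(epicenter_distance_km(15)))  # 126
```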

The DART (Deep-ocean Assessment and Reporting of Tsunamis) system was used to monitor the tsunami. Each station consists of a pressure recorder anchored to the seafloor and a buoy on the surface of the water. Information from the pressure recorder is transmitted to the buoy, and the buoy then sends the information to a satellite that relays it to a control station.
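As a rough illustration of the idea, the sketch below converts a seafloor pressure reading into a water-column height and flags a possible tsunami when that height strays from the predicted tide. The density, gravity, and threshold values are assumptions for the example, not DART's real detection parameters.

```python
# Simplified sketch of the idea behind a bottom pressure recorder:
# convert seafloor pressure to an equivalent water-column height and flag
# a possible tsunami when the height departs from the expected tide by more
# than a threshold. All constants here are assumed values for illustration.

SEAWATER_DENSITY = 1025.0   # kg/m^3, assumed average
GRAVITY = 9.81              # m/s^2
THRESHOLD_M = 0.03          # assumed detection threshold (3 cm)

def water_column_height(pressure_pa):
    """Convert bottom pressure (Pa) to an equivalent water-column height (m)."""
    return pressure_pa / (SEAWATER_DENSITY * GRAVITY)

def tsunami_suspected(pressure_pa, expected_tide_height_m):
    """Return True if the measured height deviates too far from the tide forecast."""
    anomaly = water_column_height(pressure_pa) - expected_tide_height_m
    return abs(anomaly) > THRESHOLD_M

# Example: a reading equivalent to 4000.05 m of water when the tide model
# predicts 4000.00 m would trip the (assumed) 3 cm threshold.
print(tsunami_suspected(4000.05 * SEAWATER_DENSITY * GRAVITY, 4000.00))  # True
```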

Although the DART system can detect tsunamis, there are still issues with late warnings and reliability. DART stations were designed to last for four years, but because they sit in such a harsh environment, many barely last one year. If a DART station becomes inoperable, there is no coverage for the area where that buoy is located, and more lives can be lost because people do not get enough warning.

Frugal Google

Google grew out of a research project started in January 1996 by two men, Larry Page and Sergey Brin, who were attending Stanford University in California. Their search engine, originally called BackRub, eventually started taking up too much bandwidth at Stanford, and Page and Brin later renamed it Google, a play on “googol,” the number one followed by one hundred zeros. They chose the name to reflect their goal of organizing a seemingly infinite amount of information on the internet.

In August of 1998, Andy Bechtolsheim, co-founder of Sun Microsystems, wrote Page and Brin a cheque for $100,000, and the two men opened a bank account to deposit it. In 2000, Google began selling text-based advertisements at five cents per click. In 2006, the word “google” was added to the Merriam-Webster Collegiate Dictionary and the Oxford English Dictionary.

Ninety-nine percent of Google’s revenue comes from its web advertising services, AdWords and AdSense. With AdWords, advertisers submit ads to Google along with a list of keywords relating to the product, service, or business. When someone searches for one or more of those keywords, the ad appears in a sidebar, and every time someone clicks on the ad, the advertiser pays Google. With AdSense, a webmaster places ads on his or her own site: Google’s spiders crawl the site, analyze its content, and select ads relevant to it. Every time someone clicks on an ad on the webmaster’s site, the webmaster receives a portion of the ad revenue and Google keeps the rest. It is because of advertising that Google earned $29 billion in revenue in 2010.
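A toy calculation makes the pay-per-click and revenue-share model easier to follow. The 68 percent webmaster share used below is only an assumed figure for illustration, not a number from this post.

```python
# Toy illustration of the pay-per-click model described above. The webmaster
# revenue share is an assumption for the example.

def advertiser_cost(clicks, cost_per_click):
    """Total amount the advertiser pays Google for a batch of clicks."""
    return clicks * cost_per_click

def adsense_split(ad_revenue, webmaster_share=0.68):  # assumed share
    """Divide ad revenue between the webmaster and Google."""
    webmaster_cut = round(ad_revenue * webmaster_share, 2)
    return webmaster_cut, round(ad_revenue - webmaster_cut, 2)

# Example: 1,000 clicks at $0.05 each (the original 2000-era price).
revenue = advertiser_cost(1000, 0.05)   # 50.0
print(adsense_split(revenue))           # (34.0, 16.0)
```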

Google Checkout is designed to make online purchases easier. When someone visits a store that subscribes to Google Checkout, he or she can click the checkout option and Google will handle the transaction. Google charges a fee of two percent of the purchase plus twenty cents per transaction.
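That fee is easy to check with a little arithmetic; the sketch below simply applies the two-percent-plus-twenty-cents formula to a sample purchase.

```python
# Quick check of the fee structure described above: two percent of the
# transaction amount plus a flat twenty cents.

def checkout_fee(amount):
    """Fee charged on a transaction of `amount` dollars."""
    return round(amount * 0.02 + 0.20, 2)

def merchant_receives(amount):
    """What the store keeps after the fee."""
    return round(amount - checkout_fee(amount), 2)

print(checkout_fee(25.00))        # 0.7
print(merchant_receives(25.00))   # 24.3
```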

Google processes over one billion search requests every day.

Gesture Based Computing

Gesture-based computing refers to the ability to interact with devices through natural human movement. Different devices recognize different kinds of gesture-based input. Touch screens, such as the iPod Touch, recognize one or more fingers on the screen. Game systems, such as the Nintendo Wii, sense your movement with a handheld remote and an infrared sensor. Hands-free systems, such as the Xbox Kinect, track your movements using a set of cameras.
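To give a feel for what “recognizing” a gesture means in software, here is a minimal sketch that classifies a touch-screen swipe from its start and end points. Real devices track many sampled points and multiple fingers at once; this only shows the core idea.

```python
# Minimal sketch of one kind of gesture input mentioned above: classifying a
# touch-screen swipe from its start and end coordinates.

def classify_swipe(start, end, min_distance=50):
    """Return 'left', 'right', 'up', 'down', or 'tap' from two (x, y) points."""
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    if abs(dx) < min_distance and abs(dy) < min_distance:
        return "tap"                     # barely moved: treat it as a tap
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"    # screen y grows downward

print(classify_swipe((100, 300), (400, 320)))  # right
print(classify_swipe((200, 500), (210, 120)))  # up
```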

The possibilities surrounding gesture-based computing are amazing. Take the design of a building, for example. My aunt is in a wheelchair. She was asked, along with a little person, to tour a new facility to see how accessible it was for persons with disabilities.

Some of the things that my aunt and the other person pointed out on their tour were that one ramp was too steep, some of the door knobs were too difficult for my aunt to open, the sinks were too high for the little person, and so on. The brand-new building had to be renovated for better accessibility before it even officially opened. How much money would have been saved if my aunt and the other person could have done a virtual tour using gesture-based computing?

I believe that someone will come up with a system that allows a user to control everything in his or her house using a verbal command or a simple gesture, such as a clap of the hands.

The possibilities for teaching and learning are endless. Already, medical students use simulations that teach them how to use certain instruments through gesture-based interfaces. In classrooms, chalkboards have been replaced with interactive whiteboards. I believe that, in the future, there will be much more virtual education.

I believe that in the near future, our houses and cars will be controlled using verbal commands or other simple commands. For example, we may be able to tell our cars to lock the doors and, thanks to voice-recognition software, the car will be able to do this without the need for keys.
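As a rough sketch of what such a system might do once speech recognition has turned audio into text, the example below maps a few made-up phrases to made-up household and car actions; everything in it is hypothetical.

```python
# Toy command dispatcher: the phrases and the actions they trigger are
# invented for illustration only.

COMMANDS = {
    "lock the doors":     lambda: print("Doors locked."),
    "turn on the lights": lambda: print("Lights on."),
    "start the car":      lambda: print("Engine started."),
}

def handle_command(recognized_text):
    """Run the action for a recognized phrase, if we know it."""
    action = COMMANDS.get(recognized_text.strip().lower())
    if action:
        action()
    else:
        print("Sorry, I don't know that command.")

handle_command("Lock the doors")   # Doors locked.
```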

Interaction, Zoom Data, and Crowd Sourcing

Human-Computer Interaction (HCI) brings together computer science, behavioral science, design, and other fields of study. HCI studies people and computers together, with the goal of improving the relationship between people and their computers. It is a complicated field because it involves every aspect of the computer, both hardware and software, as well as human factors such as behavior, psychology, ergonomics, and language. It is easy to see how HCI can benefit anyone who uses a computer: the more user-friendly something is, the happier the person using it becomes.


Zoom Data allows the user to look at a picture or an image from a distance and then move closer and closer to see more detail, without needing to continually click to find more information. Google Earth is one of the most common uses of Zoom Data. The Zoom Data video explains more about how it works. Zoom Data will benefit me because it will be easier to keep zooming in to find information instead of searching numerous websites.
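One common way this kind of zooming is implemented, in Google Earth and similar viewers, is with a pyramid of image tiles: zooming in simply fetches smaller, more detailed tiles rather than loading a whole new page. The sketch below shows a tile lookup under an assumed tiling scheme, not the one Google Earth actually uses.

```python
# Sketch of a tile-pyramid lookup: at zoom level z the image is divided into
# a (2**z) x (2**z) grid of tiles, and a viewer only fetches the tiles that
# cover the point being looked at.

def tile_index(x_norm, y_norm, zoom):
    """Tile column/row containing the point (x_norm, y_norm), each in [0, 1)."""
    n = 2 ** zoom                      # the image is an n-by-n grid of tiles
    return int(x_norm * n), int(y_norm * n)

# Zooming in on the same point: the grid gets finer, so the indices change,
# and the viewer fetches progressively more detailed tiles.
for zoom in (0, 3, 6):
    print(zoom, tile_index(0.42, 0.73, zoom))
# 0 (0, 0)
# 3 (3, 5)
# 6 (26, 46)
```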

Crowdsourcing has been defined by Wikipedia as “the act of outsourcing tasks, traditionally performed by an employee or contractor, to an undefined, large group of people or community (a crowd), through an open call.” Probably the most famous example of crowdsourcing is Wikipedia itself. Anyone can, and is encouraged to, contribute to it. There can be problems with crowdsourcing, as Wikipedia has discovered. A couple of years ago, a contributor changed the description of Paul Martin from “the 21st Prime Minister” to “the worst Prime Minister.” Apparently, it took a few days before this deliberate misinformation was caught by Wikipedia’s editors.
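One simple way to aggregate an open call’s contributions is to take the most common answer, as the toy sketch below does. Wikipedia itself relies on open editing and volunteer editors rather than voting, but the example gives a feel for the “many contributors” idea, and for why a single bad edit can linger until enough people weigh in.

```python
# Tiny sketch of aggregating crowd contributions by majority.

from collections import Counter

def crowd_answer(submissions):
    """Return the most common answer submitted by the crowd, with its count."""
    answer, count = Counter(submissions).most_common(1)[0]
    return answer, count

votes = ["21st Prime Minister", "21st Prime Minister",
         "worst Prime Minister", "21st Prime Minister"]
print(crowd_answer(votes))  # ('21st Prime Minister', 3)
```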

Crowdsourcing will benefit me because I can type keywords into a search engine like Google or into Wikipedia and millions of pages about the topic will pop up.