
Thursday, March 22, 2012

The Harmful Effects of Too Much Video Game Playing

Ever since Computer Space came out in 1971, video games have become more and more life-like. Graduating from primitive 2D graphics to immersive 3D environments, computer games have influenced kids and adults for more than forty years. While there is some controversy about whether computer games are beneficial or harmful, the impact they have on people is well documented, and studies conducted over the years have produced some interesting results.

The Effects of Violent Video Games

Many studies have been undertaken to see how violent video games affect children. One such study shows that there is a connection between violent video games and aggressive behavior. Dr. Craig A. Anderson of Iowa State University in Ames conducted a study of the gaming habits of more than a thousand children from Japan and the United States (7). The study revealed some startling facts about the effects of video games on children.

 During the course of the experiment, the children’s video game habits were studied, along with their behavior. At the start, the behavior of the children--whether they were more passive or aggressive--was taken into account. The participants rated their own behavior, but Anderson’s team also gathered information from their peers and teachers (7). Anderson and his colleagues concluded that children who played violent video games on a regular basis were more aggressive than their peers who rarely or never played such games (7).

 Anderson writes in an article for the American Psychological Association: "High levels of violent video game exposure have been linked to delinquency, fighting at school and during free play periods, and violent criminal behavior (e.g., self-reported assault, robbery)” (1).

Other experts disagree with Anderson’s research, claiming that violence in general is not the problem with video games. Dr. Cheryl K. Olson, a director of the Center for Mental Health and the Media at Massachusetts General Hospital in Boston, is quoted by CNN as saying, “I think there may well be problems with some kinds of violent games for some kinds of kids. We may find things we should be worried about, but right now we don't know enough” (7).

While there is controversy over whether all genres of violent video games are harmful, research on the effects of violent video games on the brain has produced some startling results. According to a fairly recent study by Dr. Vincent Matthews of Indiana University, computer games have a measurable impact on the brain. Participants, ages 18 to 29, were randomly assigned either to play a Mature-rated, first-person shooter game (13) or to play no computer games during the study. None of the participants had much experience playing computer games, and all were men (9).

 In the first part of the experiment conducted by Matthews, the subjects were given counting and emotional tests to complete while their brains were scanned by a functional MRI machine. During the first week, the group assigned the video game played it for 10 hours on average (13).

 After one week had passed, both groups were given counting and emotional tests and scanned again by the functional MRI machine. The scans revealed that the game-playing group had less activity in the areas of their brains where attention, emotions, and the restraint of impulses are centered (9). The group that played the Mature-rated game did not respond to emotional content as they had during the first test (13). A test that involved counting revealed that the video game group had decreased activity in the areas of their brains where concentration and attention are controlled (9).

During the second week, neither group played video games. The testing that took place at the end of the second week revealed that the video game group's performance had improved (13) compared to the previous test, but had not quite returned to the level of their first test, taken before they played the game. The control group, which had played no video games, performed the same, as expected.

 Matthews is quoted in an article on Time.com as saying, “Behavioral studies have shown an increase in aggressive behavior after violent video games, and what we show is the physiological explanation for what the behavioral studies are showing. We’re showing that there are changes in brain function that are likely related to that behavior” (9).

Brain activity is not the only thing in the brain to be affected. Dr. Paul Grasby of Hammersmith Hospital in London obtained evidence that playing video games can become chemically addictive. He and his fellow researchers learned that dopamine production doubles in the brain of a person who consistently plays video games (15). The researchers found that the dopamine produced had the same effect as amphetamines or Ritalin injected into the bloodstream (15). Dopamine is considered to be a hormone associated with pleasure (4), and Grasby’s team has compared playing video games to taking “a dose of speed” (15).

 Video Game Addiction

WOW in the Future
This dopamine craving makes sense in light of many players' obsession with massively multiplayer online role-playing games (MMORPGs). Games such as World of Warcraft and EverQuest bring the player into a fantasy world where he or she can interact with thousands of other online players and complete missions in complex maps. Each player controls a character he or she can use to gain advancements and special abilities. Most players of such games become completely engrossed in their games. A website, gamerwidow.com, was started so that “gamer widows” can “come and share their frustrations with their fellow gamer widow(er)s, and discuss their feelings and develop a [camaraderie] that only those in their positions understand” (6).


Tuesday, March 6, 2012

Amazing Future Computer Technology


Millions of people around the world use them every day. Nearly every company and many jobs depend on them. Cars, iPods, cell phones, cruise ships, satellites, and many other gadgets would not work without them. Practically anything electronic in use today relies on them. Computers have changed the way we live, and, now, most of us wouldn’t know what to do without them. These electronic computing machines have advanced a lot since their beginning in the 1940s. At that time, one computer filled up a room and had a small fraction of the computing power of an iPhone. Yes, computers have come quite far from their humble origins, but if computers in the future improve as researchers say they will, the improvements since the 1940s will be nothing in comparison. The developments and concepts for the future of computer technology are really quite amazing. Before we look at the future of computers, let’s take a quick glimpse at the past.

In 1938, Konrad Zuse invented the Z1 computer. This primitive machine was an early binary digital computer. Unlike later computers, his invention was not capable of memory storage. The United States military created the ENIAC (Electronic Numerical Integrator and Computer) in 1945 (4). This early computer, like modern computers, could store and save data. A few decades later, in 1970, the first RAM (random-access memory) chip and the first microprocessor, the Intel 4004, came into existence, replacing vacuum-tube technology (4). A year earlier, the military had developed the network that would become the origin of the internet: ARPANET, an acronym for Advanced Research Projects Agency Network. It was a network of computers meant to share the burden of computing so that one computer, with the help of a few others in the network, could perform its computations faster (12). It is also thought to have been designed to preserve information in case of a nuclear attack.

Twenty-two years later, in 1991, the World Wide Web became available to the public (4). According to James Coates of the Chicago Tribune, during the “Great Holiday Blowout of 1995,…more people bought personal computers than ever in history” (3). During the ’90s, computers became more commonplace and websites multiplied.

Since the internet boom of the ’90s, computers able to access the internet have shrunk so small that they easily fit in a pocket. The iPhone and other smartphones can access the internet, snap and save pictures, record video, and perform most of the functions of a desktop computer. The invention of touch-sensitive screens was another big improvement, and a necessary one for smartphones. Even the search engine company Google has entered the smartphone market with its own phone, the Nexus One.

Google’s CEO, Eric Schmidt, has some interesting opinions about the future of computing. He believes that a computer 50 years from now will have a “computational capability that is just so free and so amazing that people will assume that it is an assistant. It knows who you are, it knows what you do, it makes suggestions, it intuits things for you” (1). Just as the Google search engine makes suggestions when you start to enter a search phrase, the computers of the future will be able to accurately guess what you want and provide you with suggestions. Schmidt also believes that computers of the future will be much faster than today’s.
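
To picture the kind of suggestion-making Schmidt describes, here is a minimal sketch in Python of how a program can rank completions for a typed prefix. The phrase list, counts, and function names are invented for illustration; Google's real system is vastly more sophisticated.

```python
# Toy autocomplete: rank known phrases that begin with what the user
# has typed so far. The phrases and popularity counts are invented.
SEARCH_COUNTS = {
    "future of computing": 120,
    "future of computers": 95,
    "future computer technology": 60,
    "future cars": 40,
}

def suggest(prefix, limit=3):
    """Return the most popular known phrases beginning with `prefix`."""
    matches = [p for p in SEARCH_COUNTS if p.startswith(prefix.lower())]
    return sorted(matches, key=SEARCH_COUNTS.get, reverse=True)[:limit]

print(suggest("future of"))  # ['future of computing', 'future of computers']
```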

Computer processing speed today is limited by the speed of electrons moving through metal and electronic components. Research into a new type of computer circuit is being conducted by Queen’s University Belfast and Imperial College London (13). Like modern circuits, the new circuit would also use metal, but on a much smaller scale than is achieved today. The components being developed are more than 100 times smaller than the width of a human hair (13) and consist of arrangements of metal structures that interact with light in a unique way. The researchers call the tiny components “nanoplasmonic devices” (13). Instead of electrons passing through the tiny circuits, light particles (photons) would transmit data at lightning speeds. The team is developing nanoscale waveguides to direct light along a desired route and nanoscale light detectors to pick up the light signals (13). The hope is that future computers may run at much higher speeds, allowing for greater processing power in a much smaller package.

High speed and smaller components are not the only goals for future computers. Some believe that components of the human body, such as neurons and DNA, can one day be used to advance computer technology to a new level. Researchers from I.B.M. and four universities are currently working on a project to create a computer that mimics the brain. The four universities--Cornell University; the University of California, Merced; Columbia University; and the University of Wisconsin--and I.B.M. started the project in 2008 (8). What began as a large, complex project with the broad goal of using the brain as inspiration for a computer has, over time, focused on developing a computer that somewhat resembles the brain in its structure.

Unlike modern computers, the brain is made of billions of neurons, synapses, and complex pathways. The synapses act as data storage centers and link the neurons to each other. Electrical impulses rapidly pass along the axons--the long fibers extending from the neurons--and get stored and transmitted by the synapses. In a computer, by contrast, the data storage center is separate from the processor (8). A communications channel, a bus, links the two together (8).

The team from the four universities and I.B.M. has developed a “neuromorphic” computer chip that attempts to copy the structure of the brain (8). It contains 256 neuron-like nodes connected to 262,000 data storage modules resembling synapses (8). When connected to a computer, the chip allows it to recognize handwritten numbers. The computer connected to the chip has also learned how to play Pong, a primitive computer game. The chip is still in the developmental stage, but its future applications are numerous. According to scientists on the team, neuromorphic computers could guide robots through battlefields and allow robots to be trained instead of just programmed; neuromorphic computers in health-care monitors could alert nursing-home staff when a resident is sick; and neuromorphic computers could provide sight to blind people through a high-tech prosthetic eye (8). Even if these concepts become realities, scientists admit that neuromorphic computers will not be able to exactly replicate the structure or functioning of the human brain. The brain is an organ we still do not fully understand. Copying it exactly would be impossible.
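
The article does not describe the chip's internal logic, but the general neuron-and-synapse idea behind neuromorphic hardware can be sketched in software. Below is a minimal leaky integrate-and-fire neuron, a standard textbook model; it is an illustration of the concept, not IBM's design, and all names and constants are invented.

```python
# A leaky integrate-and-fire neuron, a classic software model of the
# neuron-and-synapse idea behind neuromorphic chips. Each step, weighted
# inputs from "synapses" are added to a decaying potential; when the
# potential crosses a threshold, the neuron "fires" and resets.
class Neuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak  # fraction of potential retained each step

    def step(self, weighted_inputs):
        self.potential = self.potential * self.leak + sum(weighted_inputs)
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True           # the neuron spikes
        return False

neuron = Neuron()
for t in range(5):
    print(t, neuron.step([0.4]))  # constant input from a single synapse
```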

Another interesting idea for the future of computing that relies on components of the human body is being pursued by Jian-Jun Shu at Nanyang Technological University in Singapore. Shu’s idea is that computers may one day be based on DNA. One problem with modern computer circuits is that as components get smaller, they tend to heat up faster. Another problem is that the binary system--zeros and ones--used by all computers today has limits when computers try to solve highly complex equations. Shu told PhysOrg.com that, “With DNA-based computing, you can do more than have ones and zeroes. DNA is made up of A, G, C, T, which gives it more range. DNA-based computing has the potential to deal with fuzzy data, going beyond digital data” (10).

Shu and his students are able to manipulate DNA strands by combining or splitting them. The DNA strands will, according to Shu’s model, store information which can then be retrieved and used for computation. Shu explained to PhysOrg.com that, “We can join strands together, creating an addition operation, or we can divide by making the DNA smaller by denaturization. We expect that more complex operations can be done as well” (10). At this point in time, DNA computers are just a concept, without any real prototype. One day, DNA, the very substance that controls how we look and how we grow, might be used to speed up computers. The range of applications of such DNA computers really is beyond what we can perceive right now. The processing power of such a computer would be tremendous. 
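
To see concretely why four letters give "more range" than two, consider encoding numbers in base 4, one nucleotide per digit: each DNA letter then carries two bits of information instead of one. The mapping below (A=0, G=1, C=2, T=3) is invented for illustration and is not Shu's actual scheme.

```python
# Base-4 number encoding using the four DNA letters. The mapping
# A=0, G=1, C=2, T=3 is an invented illustration, not Shu's scheme.
DIGITS = "AGCT"
VALUES = {ch: i for i, ch in enumerate(DIGITS)}

def encode(n):
    """Encode a non-negative integer as a string of DNA letters (base 4)."""
    if n == 0:
        return DIGITS[0]
    out = ""
    while n:
        out = DIGITS[n % 4] + out
        n //= 4
    return out

def decode(strand):
    """Decode a string of DNA letters back into an integer."""
    n = 0
    for ch in strand:
        n = n * 4 + VALUES[ch]
    return n

print(encode(2012))      # 'GTTGTA'
print(decode("GTTGTA"))  # 2012
```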

While the full range of applications for DNA and neuromorphic computers is beyond our comprehension right now, the applications of our current technology are starting to be realized. The first long-distance test drive of autonomous, or self-driving, vehicles took place in 2010 during the VisLab Intercontinental Autonomous Challenge (5). A number of vans, equipped with a sophisticated array of equipment, drove from Italy to China with little human intervention for the 2010 World Expo in Shanghai (5). That same year, Google rolled out its own fleet of autonomous vehicles.

Now, imagine for a moment that you are driving through California, down Highway 1. You look to your left and see a grey Toyota Prius with a strange device mounted to the roof. Two people are inside, but the man sitting in the driver’s seat doesn’t appear to be driving. His hands are resting on his lap, yet the car is staying perfectly in its own lane. You’ve seen one of Google’s seven autonomous test cars. As if to prove that it is not limited to the search engine and software business, Google has launched its own fleet of self-driving cars.

Google’s Toyota Priuses are each equipped with a high-tech array of sensors, processors, and cameras (9). A device called a lidar, attached to the top of the experimental car, records a detailed map of the surroundings. A video camera, hanging from the ceiling and aimed through the windshield, provides video of the road ahead. Through it, the onboard computer can recognize obstacles and people in its path and respond appropriately. Three radar sensors in the front and one in the rear provide input about the positions of cars and other objects nearby. And, if that’s not enough, a position estimator measures the car’s movements and helps the onboard computer accurately locate its position on a map.
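
The article lists the sensors but says nothing about Google's software, so the following is only a schematic sketch of how readings from several sensors might be fused into a single driving decision. Every function name, parameter, and threshold here is invented.

```python
# Schematic sensor fusion: combine hypothetical readings from the lidar,
# radar, and camera into one driving decision. The 10-meter threshold
# and all names are invented; Google's actual software is not public.
def decide(obstacle_distance_m, closing_speed_mps, camera_sees_person):
    if camera_sees_person and obstacle_distance_m < 10:
        return "brake"
    if closing_speed_mps > 0 and obstacle_distance_m < 10:
        return "slow down"
    return "continue"

print(decide(obstacle_distance_m=6, closing_speed_mps=0,
             camera_sees_person=True))  # brake
```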

A technician, seated in the front passenger’s seat, monitors a computer screen while a hired “driver” sits in the driver’s seat and watches the Prius drive itself (9). If something were to go wrong, the “driver” could tap the brake and regain control of the car. The system that Google developed for its self-driving cars has proven to be very reliable. The seven test cars have driven a total of 140,000 miles with little human intervention (9). It is estimated that self-driving cars will not reach the market for more than eight years, but Google’s cars have proven that computer-controlled vehicles can be very safe. Because of Google and other tech companies experimenting with self-driving vehicles, Nevada has become the first state to legalize self-driving cars (6). Five other states, including California, are considering legalizing the novelty as well (6).

Google is not the only software company to use software in applications apart from the desktop computer. Microsoft has been attempting to visualize what the future may hold for the home. A project that started in the 1990s, the “Microsoft Home” contains gadgets that Microsoft believes may be found in homes of the future. The house is located at Microsoft’s campus in Redmond, Washington. First built in 1994, the house has undergone a number of changes and updates over the years (11). The latest version is the 2011 Microsoft Home. That year, Jonathan Cluts, director of consumer prototyping and strategy at Microsoft, led a tour through the house, demonstrating its amazing features. Placing his hand on a hand scanner at the door, he waited briefly for it to unlock before stepping through.

Inside, Cluts spoke to the central home computer system: “Grace, what’s up?” A female voice responded with information about appointments, messages, weather forecasts, and traffic (11). The home features a teenager’s room complete with walls that display moving images and changeable background themes; a countertop that displays recipes and appliance manuals, which can be accessed by voice or gesture; a thin, glass display screen which can play movies, TV, or music (11); a “smart” digital bulletin board (7); and much more. Cluts told Fox News, “The home will sense humans and know our gestures and actions” (2).

Radio frequency identification chips in containers and other household objects help the central computer to identify, catalogue, and monitor objects within the house. For instance, when the refrigerator is low on milk, the central computer, Grace, will announce it (2). Instead of having a personal robot do everything for those in the house, the technology in the house itself will work with members of the household to make their lives more comfortable and convenient. Cluts believes that one “exciting” technology in the home of the future will be the ability to give the central home computer voice commands and have them carried out immediately (2). In the Microsoft Home, dimming the lights, hearing your email read to you, or turning on the TV requires only a vocal command (2). Cluts believes that such homes are not that far in the future. He thinks they’ll be on the market by 2015 or 2020 (2). Whether people will be able to afford them remains to be seen.
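
The article describes the behavior of the voice-command system but not its implementation, so here is only a toy sketch of the dispatch idea: map a recognized phrase to a household action. All command phrases and functions are invented placeholders, not Microsoft's system.

```python
# Toy voice-command dispatcher in the spirit of the Microsoft Home demo.
# The phrases and device actions are invented placeholders.
def dim_lights():
    print("Lights dimmed.")

def read_email():
    print("Reading your email aloud...")

def turn_on_tv():
    print("TV on.")

COMMANDS = {
    "dim the lights": dim_lights,
    "read my email": read_email,
    "turn on the tv": turn_on_tv,
}

def handle(utterance):
    action = COMMANDS.get(utterance.strip().lower())
    if action:
        action()
    else:
        print("Sorry, I didn't catch that.")

handle("Dim the lights")  # Lights dimmed.
```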

In conclusion, we took a brief glimpse at the history of computers and the internet, starting with the Z1 computer. Then, we saw how scientists are working on improving computer speed and processing power by using nanotechnology, neuromorphic chips, and DNA. Finally, we learned how scientists, engineers, and technicians have used modern technology to create self-driving cars and a high-tech, “smart” house. There are other developments besides the ones we looked at, but they are beyond the scope of this article. Whether or not these ideas and goals for the future of computer technology ever materialize, we will certainly be living in a different world twenty years from now.









Works Cited

(1) Ahmed, Kamal. "Google's Eric Schmidt predicts the future of computing - and he plans to be involved." telegraph.co.uk. Telegraph Media Group Limited, 5 Feb. 2011. Web. 1 March 2012.

(2) Brandon, John. "The Digital Home of the Future, Revealed Today." foxnews.com. FOX News Network, LLC, 14 March 2011. Web. 2 March 2012.

(3) Coates, James. "Foggy Minds--and Dazzling, Flawed Computers." chicagotribune.com. Tribune Company, 21 Jan. 1996. Web. 1 March 2012. 

(4) "Computer History Timeline." history-timelines.org.uk. History-Timelines.org.uk, n.d. Web. 1 March 2012. 

(5) Halley, Drew. "Robot Vans Drive, Driverless, from Italy to China (Video)." singularityhub.com. Singularity Hub, 4 Aug. 2010. Web. 5 March 2012.

(6) Hirsch, Jerry. "Self-driving cars: Bill would set rules for a new automotive era." latimes.com. A Tribune Newspaper website, 29 Feb. 2012. Web.

(7) Lai, Eric. "Microsoft: Future homes to use smart appliances, interactive wallpaper." computerworld.com. Computerworld Inc, 29 Sept. 2006. Web. 1 March 2012.

(8) Lohr, Steve. "Creating Artificial Intelligence Based on the Real Thing." nytimes.com. The New York Times Company, 5 Dec. 2011. Web. 1 March 2012. 

(9) Markoff, John. "Google Cars Drive Themselves, in Traffic." nytimes.com. The New York Times Company, 9 Oct. 2010. Web. 1 March 2012. 

(10) Marquit, Miranda. "The next computer: your genes." PhysOrg.com. PhysOrg.com, 16 May 2011. Web. 2 March 2012.

(11) "Microsoft Facility Helps You Make Yourself at Home in the Future." microsoft.com. Microsoft, 8 Aug. 2011. Web. 2 March 2012. 

(12) Peter, Ian. "The beginnings of the Internet." nethistory.info. www.nethistory.info, 2004. Web. 2 March 2012.

(13) "Super-Fast Computers Of The Future." sciencedaily.com. ScienceDaily LLC, 1 Sep. 2009. Web. 1 March 2012.


Monday, February 27, 2012

How to Be a Good Listener in a Lecture and One on One: The Success of Active Listening

We have done it every day of our lives, ever since early childhood. It is something we rarely think about when we’re doing it, but it is just as important as sharing our thoughts with others. It is a form of communication just as much as talking is. Listening is an art which we all, including the author, can improve on. Studies show that people remember only 25 percent of what they hear. That means that we are only truly listening to a quarter of the information, thoughts, or feelings being told to us. Good listening skills are important for success in college, in business meetings, in careers, and in relationships.

Many marriages can be repaired when couples listen to each other. John Gottman, a psychology professor at the University of Washington, learned after 25 years of studying marriages that “active listening” is highly important in solving conflicts (2). We will be looking at two types of listening: listening one on one and listening in an audience. Actively listening to a lecture or speech can be very rewarding, as opposed to merely hearing it. The difference between listening and hearing is that hearing is just letting sound enter your ear, while listening is paying close attention to what is being said.

We will focus first on listening in an audience. Listening to a speaker, such as a professor, is highly important if you want to succeed in your class or job. Companies lose thousands of dollars each year due to poor communication. According to a study by SIS International Research, roughly 70 percent of the employees of a business waste an average of 17.5 hours each week dealing with problems caused by poor communication (1). That’s a huge chunk of time and money lost! So, how can we improve our listening skills? A study conducted by Larry Vandergrift, a University of Ottawa researcher, revealed some interesting facts about how to improve listening skills in the lecture hall.

The following points will help you be a better listener at a lecture (1):


1. Have goals for what you want to learn from the lecture or speech. Before you go to the lecture, predict what you think the speaker is going to say and think about what you would like to learn from the speech.

2. Before going to the speech, mentally review what you know about the topic of the speech. Your knowledge is a foundation which you can then build on. When you review your knowledge on the subject before the speech, you’ll be more aware of how the new information from the speech fits in with what you already know. This makes it easier to learn. Learning based on past knowledge is important for progress.

3. When you listen to the speech, listen for what is important or relevant to you and take notes. The things that stand out to you are important, and writing them down is a must. Often, we can’t easily remember everything we’ve heard.

4. Don’t get distracted. People around you sending text messages, the speaker’s resemblance to someone you know, people standing up and leaving the lecture, and your own wandering thoughts can all easily sidetrack you. Don’t let them. Keep your eyes on the speaker and your thoughts on what is being said and what it means.

5. Don’t get thrown off by confusing or unfamiliar ideas, words, or details. Many professors and some speakers use words that are unfamiliar. Or, they may talk about things that go over your head. The key is to not be distracted from the points the speaker is trying to make.

Monday, February 20, 2012

Amazing Biblical Artifacts Unearthed and the Discovery of a Lifetime

A truck carrying half a dozen German soldiers rumbles down a dusty desert road. Clinging to the underside of the truck and slowly moving forward, an adventurer grits his teeth, knowing that what he is about to do could cost him his life. Reaching the front of the truck, he grabs the grill and pulls himself up onto the hood. Sometime later, having lost the escorting vehicles and soldiers, he reaches a dock, where the truck is unloaded and a box containing an ancient artifact is hoisted up onto a ship. The artifact is no ordinary artifact. It happens to be the one and only Ark of the Covenant.

Millions of people around the world have watched Raiders of the Lost Ark and its sequels. The Ark of the Covenant, a mysterious, gold-covered box which contained the Ten Commandments, among other things, has been searched for by many over thousands of years, but never found. However, many other artifacts of biblical significance have been uncovered in recent years and over the past century. While they are not nearly as spectacular as a discovery of the Ark of the Covenant would be, they are, nevertheless, eye-opening and deserving of attention.

Shishak (Shoshenq I) Relief

Artifacts depicting images of biblical events have been discovered all over the Middle East. One insightful depiction can be found on the south wall of the Great Temple of Amon at Karnak, in Egypt (5). A huge sunken relief of Pharaoh Shishak (or Shoshenq I), surrounded by a number of small depictions of ancient Hebrews, catches the attention of anyone walking through the temple. Commissioned by Shoshenq I, the relief also contains writing describing a campaign in Israel in which he sacked a number of cities and took the plunder back to Egypt with him (5). The Bible records Shishak’s campaign in 2 Chronicles 12:1-9* and in 1 Kings 14:25. Rehoboam the king of Judah “forsook the law of the LORD, and all Israel with him” (2 Chron. 12:1). Then, “in the fifth year of king Rehoboam Shishak king of Egypt came up against Jerusalem, because they had transgressed against the LORD…and the people were without number that came with him out of Egypt…. And he took the fenced cities which pertained to Judah, and came to Jerusalem” (2 Chron. 12:2-4).

According to the Bible, the people in Jerusalem turned back to God and humbled themselves after hearing from a prophet called Shemaiah that because they had forsaken God, God would leave them in the hand of Shishak (2 Chron. 12:5). From reading 2 Chronicles 12, we learn that when the Israelites turned back to God, God did not allow Shishak to destroy them. But, God allowed Shishak to plunder King Rehoboam and the Temple of Solomon, in Jerusalem. The Jews would be Shishak’s servants for a time.

The Victory Relief of Shoshenq I records that he attacked various cities in the northern kingdom of Israel in addition to cities in the southern kingdom of Judah (11). Further confirmation that this event recorded in the Bible and in the Temple of Amon actually occurred is found in Israel, at the site of Megiddo. At Megiddo, a section of a stela (an upright stone slab) was discovered in 1926 during some excavations (11). On this stela, commemorating Shishak’s victory, his name can clearly be seen carved into the stone (11).



Sennacherib Palace Relief

Depictions of events described in the Bible are not limited to Egypt. Located in northern Iraq, in the ruins of ancient Nineveh, is the Palace of Sennacherib. All that is left of the magnificent home of the Assyrian king is the palace foundation and some of its walls. One particular wall is still mostly intact. On it are numerous bas-reliefs depicting Sennacherib’s successful siege of Lachish, an ancient city of Judah. The main scene shows the Assyrian attack on the wall of Lachish (12). Battering rams built into four-wheeled vehicles are slamming into the wall, beneath the battlements. The Judean soldiers defending the city are fighting fiercely, as are the besiegers (12). An epigraph states: “Sennacherib, king of the world, king of Assyria, sat upon a nimedu- throne and passed in review the booty (taken) from Lachish (La-ki-su)” (Pritchard 201, parentheses in orig.).

Friday, February 10, 2012

Medicine of the Future: The Amazing Developments in Medical Technology



"An engineer, a mathematician, and a computer programmer are driving down the road when the car they are in gets a flat tire. The engineer says that they should buy a new car. The mathematician says they should sell the old tire and buy a new one. The computer programmer says they should drive the car around the block and see if the tire fixes itself." ~Anvari.org

We have entered the second decade of the twenty-first century. Today, affordable smartphones are widespread, computer game graphics look almost life-like, computer animation is almost indistinguishable from actual footage, remote-controlled drones patrol the skies, and Google Maps provides street views of practically any city on Earth. What’s more, every year the storage capacity of the average computer hard drive increases along with its computing power. We are living on the threshold of what could be a highly advanced future.

Along with computer technology, medical technology is advancing rapidly. Micro-computers, bionic limbs, artificial organs, nanotechnology, and lab-grown organs can potentially improve the quality of human life and change modern medicine. Such changes may take some time to be fully realized, but they are in their infancy today.


Micro-Computers and Nanotechnology


Micro-computers are a fascinating concept, and, until fairly recently, they were only just a concept. Today, the concept has become a reality. One such computer that has actually been manufactured is smaller than a grain of salt (4). Professors Dennis Sylvester and David Blaauw, from the University of Michigan, have created a tiny, millimeter-long computer that contains a battery, a central processing unit (CPU), sensors, a tiny radio emitter, and electronics for powering the chip (4). The tiny computer is powered by light, requiring 10 hours of indoor lighting or 1.5 hours of sunlight exposure (4). The device is designed to be implanted in the eyes of glaucoma patients. It collects data with its sensors and transmits the data by radio wave (4). If there is too much pressure inside the eye, the chip will transmit the data to medical professionals, who will know what to do for the patient. Regarding this incredible technology, Sylvester said, “This is the first true millimeter-scale complete computing system. Our work is unique in the sense that we're thinking about complete systems in which all the components are low-power and fit on the chip. We can collect data, store it and transmit it. The applications for systems of this size are endless” (5).
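
The article does not give the chip's firmware, but the monitoring behavior it describes can be sketched: read the pressure sensor and radio out the data when the pressure is too high. The functions below are stand-ins, and the 21 mmHg threshold is simply a commonly cited upper bound for normal intraocular pressure, not a figure from the study.

```python
# Sketch of the monitoring loop described for the eye-pressure chip.
# The sensor and radio functions are stand-ins for real hardware, and
# the threshold is an assumed value, not one given in the article.
PRESSURE_LIMIT_MMHG = 21

def read_pressure_sensor():
    return 24  # stand-in for a real intraocular pressure reading

def transmit(message):
    print("radio ->", message)  # stand-in for the chip's radio emitter

reading = read_pressure_sensor()
if reading > PRESSURE_LIMIT_MMHG:
    transmit(f"intraocular pressure high: {reading} mmHg")
```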

Another kind of micro-computer is in the process of being developed. Unlike Sylvester and Blaauw’s micro-computer, this one would use DNA for its logical components. At the Hebrew University of Jerusalem, a team of scientists has created the first DNA logic gates (3). Like their non-biological counterparts, the DNA logic gates represent one of two possible states, such as the zeros or ones of binary code (3). When exactly one of two inputs was present at a DNA logic gate, the gate fluoresced, giving off light. When both inputs, or neither, were present, the gate ceased fluorescing. This is similar to how a computer logic gate works. The DNA logic gates, when connected together and injected under the skin, may be able to form a biological computing system that can detect, diagnose, and treat common sicknesses or medical conditions (3).
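
The behavior just described, light when exactly one input is present and darkness when both or neither are, is the truth table of an XOR gate. The snippet below models only that described behavior, not the underlying chemistry.

```python
# The fluorescence pattern described above matches an XOR truth table:
# the gate lights up when exactly one of its two inputs is present.
def dna_gate(input_a, input_b):
    return input_a != input_b  # True means the gate fluoresces

for a in (False, True):
    for b in (False, True):
        print(a, b, "->", "light" if dna_gate(a, b) else "dark")
```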

Speaking of computers, a fairly new technology field has been gaining ground in recent years. Ever since Don Eigler of IBM spelled out “IBM” with 35 individual xenon atoms in 1989 (13), nanotechnology has been making breakthrough after breakthrough. Unlike most technology, which is easily visible to the unaided eye, nanotechnology deals with components much smaller than the head of a pin. Instead of being measured in meters, these components are measured in nanometers; to get a picture of how small that is, a billion nanometers fit in one meter. Examples of nanotechnology already in use include carbon nanotubes (made out of billions of individual carbon atoms), which are currently used to give extra strength to mountain bikes, golf clubs, and other high-end sporting equipment (7). Because they are composed entirely of carbon atoms, carbon nanotubes are also used in water purification systems. Carbon, familiar from water filters and diamonds, is good at attracting impurities and has a strong bonding arrangement.


Carbon Nanotube

Nanotechnology also holds great promise for the future of medicine. One application of nanotechnology to the medical field is the use of nanobots--microscopic machines made out of molecules--for fighting infection. Researchers at the Southwest UK Paediatric Burns Centre at Frenchay Hospital in Bristol have teamed up with scientists at the University of Bath to develop a “dressing” that kills pathogens (such as bacteria) by releasing antibiotics from “nanocapsules” (12). The harmful bacteria produce toxins which eat through the “nanocapsules”, releasing the antibiotics (12). If this technique is perfected, the way doctors treat diseases may change. A patient may find that all he or she needs to do to recover from an illness is swallow a pill: a pill filled with “nanocapsules”. Other possibilities for nanotechnology in medicine include nanobots for repairing damaged cells, nanobots for accelerating bone repair, and nanobots for killing cancer cells (14). Yes, you read that correctly: nanotechnology is thought to be a possible cure for cancer.

Bionics


The i-LIMB

Nanotechnology is also finding a place in the developing area of medical technology called bionics. Imagine that you lose both your hands. Now, you are unable to work or do many of the things you enjoy. But there is no need to worry. All you have to do is purchase an i-LIMB and have it fitted. It sounds like something made by Apple along the same lines as an iPhone or iPod, but the i-LIMB is not another phone or portable computer. It is a prosthetic, robotic hand, created by Touch Bionics, that allows users to pick up a variety of objects, including glasses, playing cards, and suitcases. It works by detecting tiny electrical signals from the arm muscles and using them to control the movements of its individual robotic fingers, thumb, and wrist (11). Bionic legs that work in a similar way to the i-LIMB are also on the market.
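
The article says only that the hand reads tiny electrical signals from the arm muscles; the sketch below shows that control idea at its simplest, with a threshold on the muscle signal deciding whether the fingers close. The threshold and function names are invented, not Touch Bionics' firmware.

```python
# Schematic of threshold-based myoelectric control: read the muscle
# signal and close or open the hand when it crosses a threshold.
# The 0.5 threshold and all names are invented placeholders.
CLOSE_THRESHOLD = 0.5

def read_muscle_signal():
    return 0.72  # stand-in for a normalized electrode reading (0 to 1)

def set_grip(closed):
    print("fingers", "closing" if closed else "opening")

signal = read_muscle_signal()
set_grip(signal > CLOSE_THRESHOLD)
```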

Thursday, February 2, 2012

The Amazing Eye

Nearly everyone is born with it. It is more sensitive than the best scientific equipment we possess today, being capable of detecting a particle smaller than an atom. Without it, we would have no idea of the difference between a blue Mustang and a yellow Mustang of the same model, or the difference between white and black. What are we talking about? You guessed it: the eye. Without it, we would not have vision.

Vision is important to most people. We would have difficulty navigating our surroundings and writing articles, like this one, without it. Millions of people around the world do all they can to improve their sight. Some get contact lenses and glasses, spending hundreds of dollars, while others undergo laser eye surgery, spending thousands. This question should be of interest to anyone who wants good vision: “How do we see images?”

To answer this question, we must first take a quick look at the basic components of the human eye. The human eye is a complex organ, more sensitive than any device created by scientists or engineers. The components of the eye that are visible from the outside include the sclera, the cornea, the iris, the pupil, the lens, and the anterior and posterior chambers. Just like a digital camera, the eye has a dark interior, a diaphragm for controlling light levels, a sensor for capturing the images projected onto it, and a lens that automatically focuses (8). The sclera is the white part of the eye. Like a camera’s interior, the sclera is brown, or dark, on the inside. This allows it to absorb light and keep the images received by the brain from being washed out. Connecting to the sclera and covering the iris is a transparent membrane called the cornea. The cornea is a crystal-clear membrane consisting of five layers, totaling half a millimeter in thickness (2). It alone accounts for two-thirds of the eye’s focusing power (2).

Lying directly underneath the cornea is the iris, which contains the pigment melanin. Melanin in the iris can produce varying shades of blue, green, and brown, depending on the amount and distribution of the melanin. The iris encircles a hole called the pupil. As the iris sphincter muscle contracts or expands, the iris changes size, causing the pupil diameter to expand to a maximum of 7 millimeters or contract to a minimum of 3 millimeters (6).
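
Those two diameters imply a surprisingly large swing in light gathering, since the light admitted scales with the pupil's area, and area grows with the square of the diameter:

```python
# Light admitted scales with pupil area, which scales with diameter
# squared, so a fully dilated pupil admits (7/3)^2 times the light
# of a fully constricted one.
ratio = (7 / 3) ** 2
print(round(ratio, 1))  # 5.4
```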

Friday, January 27, 2012

What Makes Blonde People Blonde? The Interesting Facts About Hair


 Special Note: This article departs from my normal style, but I have included it here because the subject of hair styles and hair types is popular. Blonde jokes have done their share to contribute to the general interest in hair types. This article focuses on some of the interesting scientific facts about hair.

"A blonde was cruising down the highway at breakneck speed when a cop pulled her over.
'May I see your license and registration, please?' asked the cop.
Miffed, the blonde said, 'I wish you guys would get your act together. Just yesterday you took away my license. Now today you want me to show it to you!'" ~blondejokes.com


 What makes blonde people blonde or curly-haired people curly-haired? Genetics would likely be your answer. So then, what makes blonde hair blonde or black hair black? And, what exactly is hair composed of and how does it grow? We will attempt to answer all these questions and more in this article on the subject of hair.


You might think hair an odd subject to focus on, but it has occupied the attention of many people, both male and female, for thousands of years. Early civilizations had dress styles and hair styles which might seem outlandish to us today but were quite popular back then. Ancient Egyptians preferred to be bald, ancient Greek men commonly wore beards, and some of the Mohawk Indians shaved the sides of their heads. Today, people style and dye their hair much as their ancestors did long ago. People with curly hair have, during different periods, straightened it, and, at other times, people with straight hair have curled it.

So, what makes straight hair straight and curly hair curly? The type of hair a person has depends on the shape of the cross-section of the hair shaft. Imagine that a hair shaft is the size of a telephone pole. Say that we have a straight hair shaft. When we chop it in half and look at the cross section, we see a circle. When we chop a wavy hair shaft in half, the cross section is oval-shaped. Looking again at our enlarged hair shaft, we notice that it appears black and shiny in the sunlight. When we bring it into the shade, it no longer has a sheen. Why is this, and what gives hair its color?