Autistic Disorder - One Term, Many Meanings

Wednesday, August 25, 2010

Though autistic disorder is often thought of as a single condition, autism spectrum disorder (ASD) in fact covers five distinct diagnoses. When most people think about autism, they picture only one type - the kind they saw in the movie Rain Man - and don't realize that it is just one of the five disorders to which autism truly refers.

The first type of autism is called Classic Autism. It's also known as Kanner's autism, or Kanner's disorder, after Leo Kanner, the doctor who first described the condition in the 1930s and 1940s. Classic autism is one of the lower-functioning forms on the spectrum and is identified by its high level of social and communication difficulties. Children with classic autistic disorder rarely interact with most people. They often suffer from poor motor skills and frequently repeat actions and motions. They are also generally reluctant to make eye contact and may throw temper tantrums when they experience a change in their usual routine or environment. Though some individuals with classic autism are fully verbal, many struggle to communicate through speech, and others cannot speak at all.

The second form of autism is referred to as Rett's Syndrome. This type of autistic disorder is another low-functioning one. Rett's occurs almost exclusively in females and often comes with intellectual disability. Girls with Rett's are typically impaired in their movements and rarely communicate verbally. Studies have concluded that Rett's is passed on genetically, though no hypothesis has yet been proven to explain why it occurs only in girls when every other type of autism occurs in boys 75 percent of the time.

The third kind of autistic disorder is Childhood Disintegrative Disorder. Children with this form of autism often develop normally at first, or may initially be diagnosed with Classic Autism or Rett's Syndrome. The diagnosis changes, however, as speech and motor-skill problems accelerate. Regression occurs between the ages of two and four for unknown reasons, though it is suspected that it may be brought about by illness or certain types of surgery. This hypothesis has yet to be proven.

The fourth form of autistic disorder is Asperger's Syndrome. Children with this disorder are easy to misdiagnose because they generally have better social and communication skills than other autistic children, though they still face limitations. It typically isn't until these children begin school that those limitations become obvious. Children with Asperger's often respond very well to behavioral treatments and can function well within a normal lifestyle, especially when they begin these therapies as early as possible.

The last form of autistic disorder is also the vaguest. It is called PDD-NOS (Pervasive Developmental Disorder - Not Otherwise Specified) and is the diagnosis given to children who are believed to have autism but whose condition does not fit the definition of any of the other four types of autistic disorder. These children may show symptoms matching several of the autism forms, but cannot be diagnosed with one specific kind.

Part of the understanding of autism comes along with the knowledge of where the disorder may have come from and what can worsen the symptoms. There are many different theories, including the impact that allergies can have on an autistic child.

For some autistic people and relatives of those on the spectrum, the autism disorder classifications are too broad, and there is a belief that effective treatments are unlikely to be discovered until the spectrum is broken down further. A common phrase within the autism arena goes like this: "When you've met one person with autism, you've met one person with autism." This phrase highlights the diversity of symptoms and abilities of the people grouped together under the spectrum umbrella and confirms the complexity of this disorder.

Plasma-Carbon Symbiosis and Bioplasma Body Fusion

Biologists are beginning to realize that co-operation was just as important as competition in the evolution of life's diversity and resilience. Nearly every cell in the human body contains mitochondria, which are thought to descend from a bacterial cell that invaded an early eukaryote. Instead of one being digested, the two cells tolerated each other and began to live together - a merger which provided synergies to both. This is a startling example of symbio-genesis. But then every multi-cellular animal or plant is also an obvious example of co-operation rather than competition. More than 1,000 trillion cells live peacefully and co-operate in your body, together with anywhere from 500 to 100,000 species of bacteria. In fact, there are about ten times as many bacteria as human cells in the human body.

Lynn Margulis, a member of the National Academy of Sciences and Distinguished Professor at the University of Massachusetts, has argued that random mutation, claimed to be the main source of genetic variation, is of only limited importance. Much more significant is the acquisition and integration of new genomes by symbiotic merger. But of course, she was confining herself to carbon-based life forms.

The "Parallel Earth" hypothesis proposes that a counterpart dark matter Earth co-accreted with the visible Earth in the embryonic Solar System. According to dark plasma theory, dark matter consists largely of a plasma of very high energy non-standard particles (sometimes of a different parity) - or "dark plasma". On this counterpart Earth, life flourished, just like it did on our visible Earth. The difference was that the life forms were plasma-based. Two different substrates, plasma and carbon, gave rise to life-forms in two different habitats.

The "Dark Panspermia" hypothesis proposes that meteorites, asteroids and comets containing both the dark and visible building blocks of life fell into habitable zones and generated the first single-celled, and later multi-cellular, life-forms, which developed both ordinary and dark bioplasma bodies coupled to each other. Hence, even when life began on the visible Earth, plasma life forms were already forming symbiotic relationships with the abundant carbon-based life forms on our counterpart Earth.

Inter-Substrate Plasma-Carbon Symbio-genesis

"Symbiosis" is a term used to describe a close ecological relationship between the individuals of two (or more) different species. Sometimes a symbiotic relationship benefits both species, sometimes one species benefits at the other's expense, and in other cases neither species benefits. It has been observed by metaphysicists that the symbiotic relationship between the bioplasma and carbon-based bodies is one of "mutualism" where both species benefit. (At least one leading metaphysicist, however, describes the relationship as "parasitic".)

Practically all carbon-based life forms today, including Homo sapiens, have had symbiotic relationships with plasma-based life forms. Hominids are the product of a symbio-genesis between a carbon-based and a plasma-based life form. Unlike other animals, however, carbon-based hominids were able to utilize the alternative cognitive-sensory systems of their plasma-based symbiotic partners. Their unique brains allowed them to activate the higher energy bioplasma bodies that co-evolved with the carbon-based body, without necessarily having any conscious awareness that they were accessing a different cognitive system. Relationships developed between the lower energy carbon-based bodies of hominids and the higher energy bioplasma bodies, and were sustained for several million years up to the present.

When certain brain circuits in the biochemical brain (particularly in the parietal and temporal lobes) are disabled, the locus of consciousness is transferred from the carbon-based body of a human to the plasma-based body. During REM (Rapid Eye Movement) sleep the carbon-based body processes information from the bioplasma body.

According to dark plasma theory, there is a higher energy correlate of the (carbon-based) physical-biochemical fertilized egg. This correlate, usually in its adult form, is often called the "(etheric) double" in the general metaphysical literature. It is often observed as a replica of the carbon-based body but operates on an electromagnetic platform, being a bioplasma body. It is classified here as a "Level 3 bioplasma body". ("Level 3" signifies that the body inhabits a universe which has 3 spatial dimensions and 1 time dimension, just like the carbon-based body.)

Type 1 Bioplasma Body Fusion

"...all visible organisms, plants, animals and fungi evolved by "body fusion." Fusion at the microscopic level led to genetic integration and formation of ever-more complex individuals." - Lynn Margulis, Acquiring Genomes, 2002

According to the metaphysical literature, the Level 3 bioplasma body originates together with the physical-biochemical body (which is also a "Level 3" body) and usually dies with it, or a short time thereafter. This is not surprising, as the age of this bioplasma body approximates that of the carbon-based body, since both bodies originate at about the same time in a particular lifetime. However, in certain cases, for example accidental death, the still healthy and undamaged bioplasma body (the "donor") decouples from the carbon-based body. Subsequently it may absorb, or fuse with, an embryonic Level 3 bioplasma body which is coupled to an embryonic carbon-based body (the "recipient"). This "body fusion" gives rise to a new Level 3 bioplasma life-form and is quite rare. It does not amount to a symbio-genesis, as both bioplasma bodies are of the same species.

It was reported by Paul Pearsall (in his book The Heart's Code) that recipients of a donor's heart during heart transplants may experience certain emotions and even "cellular memories" of the donor. Recipients have reported inheriting everything from the donor's food cravings to knowledge about his murderer - information that in one case led to the killer's arrest. Similarly, in Type 1 Bioplasma Body Fusion (where the bioplasma bodies being fused are relatively close in frequency on the electromagnetic spectrum to the physical-biochemical body), the carbon-based body of the recipient may be impacted by certain events in the life of the donor, or even by the donor's appearance or phenotype.

In certain cases, features associated with the carbon-based body of the donor which impacted the donor's Level 3 bioplasma body were transferred to the carbon-based body of the recipient through the fusion of the donor's bioplasma body with the recipient's bioplasma body. In this type of symbio-genesis, typical of most reported reincarnation cases, memories relating to the donor may be accessed by the recipient. This occurs most easily when the recipient is young and the brain has not completed its full development. Access to the memories of the donor is increasingly lost as the brain prunes its neural networks.

Ian Stevenson, a scientific researcher of reincarnation-type cases, discovered that certain Burmese children who remembered "previous lives" as British or American Air Force pilots shot down over Myanmar during World War II had fairer hair and complexions than their siblings. Distinctive facial features, foot deformities and other characteristics were carried over from one life to another. Most often, birthmarks resembling scars from physical injuries were carried over. In one case, a boy who remembered being murdered in his "former life" by having his throat slit had a long reddish mark resembling a scar around his neck. A boy who remembered committing suicide by shooting himself in the head in his previous incarnation had two scar-like birthmarks that lined up perfectly with the bullet's trajectory, one where the bullet entered and another where it exited. Stevenson gathered hundreds of these cases and published articles in authoritative journals, including the Journal of the American Medical Association.

These cases suggest that a transfer of characteristics and memories took place a short time after the death of the carbon-based body and that the Level 3 bioplasma double had not disintegrated completely. In these rare cases, even certain physical features and attributes, which were developed during the previous carbon-based lifetime, may appear again in the new carbon-based body.

In these cases, the Level 3 bioplasma body had been impacted by the physical deformation to the previous carbon-based body and transmitted it as a feature in another carbon-based body which is not genetically linked with the first body. In other words it is a Carbon-Plasma-Carbon transfer of characteristics which suggests some form of cross-substrate imprinting.

Type 2 Bioplasma Body Fusion

In the majority of cases, however, Type 2 Bioplasma Body Fusion occurs. In this case, the bioplasma body at the next higher energy level (commonly referred to as an "astral body" in the metaphysical literature but classified here as a "Level 4 bioplasma body") fuses with an embryonic bioplasma body at the same energy level which is linked to a Level 3 bioplasma body. ("Level 4" signifies that the body inhabits a universe which has 4 spatial dimensions and 1 time dimension.)

In this case, memories of the fusing Level 4 bioplasma body may be difficult to access unless certain brain circuits within the carbon-based physical-biochemical body are switched off, either chemically (through psychoactive drugs), physically (through surgery, accidents, or transcranial magnetic stimulation, which simulates brain lesions) or psychologically (through deep meditation or hypnosis). During a near-death experience ("NDE") the "light" which accompanies and reveals itself to the NDEer (in his or her Level 3 bioplasma body) is usually the Level 4 bioplasma body.

Best Cardio to Burn Belly Fat

Everyone knows spot reduction is impossible, right? Well, guess again! One of Australia's top fat loss researchers published a study showing that interval training burns belly fat specifically. So if you are looking to lose inches from your waist, you have to add interval training to your workout program.
Fortunately, you can drop the slow cardio, add intervals, and still save time from your workouts. Here's why.

The Australian study compared a 20-minute interval training workout (done three times per week) against a 40-minute slow cardio workout (also done three times per week). Women did the workouts for 15 weeks, and only the interval group lost belly fat. The cardio group got practically no results at all.

So spot reduction is possible, as long as you don't expect slow cardio or endless crunches to do the trick. Instead, you need to use interval training. According to Professor Steve Boucher, the Australian co-author of the latest interval training study to show intervals work better than slow cardio, "high intensity intermittent exercise may result in greater fat loss in the abdomen".

Basically, interval training burns stomach fat first, over all other sources of fat on the body.

Now we all have heard that spot reduction doesn't work. If you haven't, here is the story. For some reason, many people think that by doing tons of crunches, they will burn stomach fat. Unfortunately, that just isn't true.

In fact, Boucher quotes the following example...

"...researchers have examined the fat content of elite tennis players' racket arm. The logic here is that if a tennis player uses his racket arm much more than his other arm then the fat content should be less. Racket arms of tennis players usually possess greater muscle and bone mass but similar fat levels."

So here's the odd thing about Boucher's theory: notice that he's not claiming sprint interval training done on a bike will burn more fat around your legs. Instead, he's claiming that interval work done by your legs will lead to a spot reduction of fat from around the belly - completely backwards from what the beginner exerciser expects. Boucher also says this interval program works really well in men with lots of abdominal fat - so it's not just for women.

So why do the intervals work so well?

Boucher believes it has something to do with the increase in hormones called "catecholamines" (adrenaline is a catecholamine hormone). These increase after intervals, but not after slow cardio.

Catecholamines are fat-burning hormones, and there are a lot of catecholamine receptors in belly fat...so he seems to think the elevated fat-burning hormones from intervals end up leading to targeted belly fat burning.

Interesting theory...we'll see if more research is done to confirm the belly fat burning hypothesis. Regardless, it's great to see studies showing intervals to be more effective for losing stomach fat than slow cardio workouts.

Surprisingly, Boucher recommends stationary cycling as one of the best ways to burn fat with intervals. Seems like another fat loss expert has been saying that for years now...oh yeah, it was me! And research shows that using both resistance training and intervals burns more fat than slow cardio workouts.

Boucher also recommends a Mediterranean diet (lots of fruits and vegetables)...another commonality with Turbulence Training (that is, the emphasis on fruits and vegetables).

So there you go...Turbulence Training was years ahead of this study, yet supported by the latest scientific research and the expert's hypothesis. And don't forget, Boucher and his group didn't even throw in the Turbulence Training resistance exercises...that probably would have resulted in even more belly fat loss.

So forget about hour-long stationary cycling workouts when you can get the same or more fat-burning benefits in 20 minutes. Exercise intensity is the most important factor determining post-exercise energy expenditure and fat loss success!

After a 5-minute warm-up, follow this sample beginner's protocol:

* Start at 15 seconds of intense effort (90% of your maximal pace).

* Follow that with "active rest" (~30% of your maximal pace) for 2 min.

* Perform 3-6 intervals.

* Finish with 5 minutes (or longer) of moderate intensity exercise.
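The beginner steps above can be sketched as a simple schedule builder. This is only an illustration: the function name is mine, and the 50%-of-max pace for the warm-up and cool-down is an assumption the article leaves unspecified.

```python
# Build a timeline for the beginner interval protocol described above.
# Durations and pace percentages for work and rest come from the article;
# the 50% figure for warm-up/cool-down is an assumption.

def interval_session(n_intervals=3, work_s=15, rest_s=120,
                     warmup_s=300, cooldown_s=300):
    """Return a list of (phase, seconds, percent_of_max_pace) tuples."""
    session = [("warm-up", warmup_s, 50)]
    for _ in range(n_intervals):
        session.append(("work", work_s, 90))         # intense effort
        session.append(("active rest", rest_s, 30))  # easy pace
    session.append(("cool-down", cooldown_s, 50))
    return session

schedule = interval_session()
total_min = sum(seconds for _, seconds, _ in schedule) / 60
```

With the minimum of three intervals, the whole session including warm-up and cool-down comes to under 17 minutes, consistent with the roughly 20-minute workout described above.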

As you become more advanced and accustomed to intervals, progress to:

* Increase your intensity to 95-98% of maximal pace (always hold a little back).

* 30- to 60-second intervals with only 30- to 60-seconds active rest.

* Try to keep your active rest the same length as, or longer than, your work interval.

* Perform 6-12 intervals per session.

* Finish with 5 minutes (or longer) of moderate intensity exercise.

Lean Six Sigma Roadmap and Implementation Guide

If you're in the business world, you have probably heard of lean manufacturing or six sigma. Almost every company in the service or manufacturing sector has adopted one of these two disciplines as an improvement methodology.

Some businesses have adopted both methods using the term lean six sigma. It is the name most often given to the combination of lean manufacturing and six sigma principles.

The reason most companies have adopted one or the other is simply that their employees have been trained in one or the other discipline. The lucky ones have employees trained in both lean manufacturing and six sigma, and understand that it only makes sense to combine the two for maximum improvement.

Lean manufacturing follows a model of Plan-Do-Check-Act (PDCA), and six sigma follows a model of Define-Measure-Analyze-Improve-Control (DMAIC).

It takes a lot of training to be an expert in both fields, and therefore very few companies have enough expertise to implement lean six sigma. However, it is worth the effort, as it doubles the amount of business improvement tools and enables problems to be solved using the "correct" tool rather than trying to fit a certain methodology to the problem.

The roadmap to lean six sigma follows the DMAIC phases.

First, define the problem. This is often the most time-consuming phase, as there are many competing projects. The project selection process should be based on the company's objectives and the value of the project. Many tools are used in the define phase, some of which are listed below.

Project Charter

Flow Charts

Process Mapping

Work Breakdown Structure (WBS)

PERT Charts

Affinity Diagram

Nominal Group Technique (NGT)

Prioritization Matrix

Gantt Charts

Voice of the Customer (VOC)

CT Trees (Critical to Quality, Critical to Schedule, etc.)

Pareto Charts

Rolled Throughput Yield (RTY)

Kano Model
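Rolled Throughput Yield (RTY), listed above, has a definition simple enough to show in a few lines: it is the product of the first-pass yields of every step in the process. A minimal sketch, where the four step yields are invented for illustration:

```python
import math

def rolled_throughput_yield(step_yields):
    """RTY: the probability a unit passes every process step defect-free,
    i.e. the product of the first-pass yields of each step."""
    return math.prod(step_yields)

# Hypothetical four-step process with first-pass yields of 98%, 95%, 99% and 97%
rty = rolled_throughput_yield([0.98, 0.95, 0.99, 0.97])  # about 0.894
```

Even when every step looks good in isolation, the rolled yield drops below 90%, which is exactly why RTY is useful during project selection.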

Measure the current state. The tools used depend on whether it's a pure lean manufacturing, six sigma, or combined lean six sigma project. However, the tools most commonly used in this phase are:

Probability and Statistics

Data Collection

Measurement Systems

Process Level Flowcharts

Process Level Mapping

Histogram

Stem and Leaf Plots

Pareto Charts

Cause and Effects Diagram and Matrix

FMEA (Failure Mode and Effects Analysis)

Control Charts

Process Capability

Gage R & R Studies

Frequency Plots

Confidence Intervals

Process Sigma
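Two of the items above, Process Capability and Process Sigma, boil down to short formulas: Cp is the specification width divided by six process standard deviations, and a sigma level can be derived from a defect rate (DPMO) via the inverse normal CDF plus the conventional 1.5-sigma long-term shift. A sketch under those standard conventions; the function names and sample numbers are my own illustration:

```python
from statistics import NormalDist

def process_sigma(dpmo, shift=1.5):
    """Convert defects per million opportunities to a sigma level,
    adding the conventional 1.5-sigma long-term shift."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + shift

def cp_index(usl, lsl, std_dev):
    """Cp capability index: specification width over six standard deviations."""
    return (usl - lsl) / (6 * std_dev)

sigma_level = process_sigma(3.4)               # the classic "six sigma" defect rate
cp = cp_index(usl=10.6, lsl=9.4, std_dev=0.1)  # hypothetical spec limits and spread
```

For the classic 3.4 DPMO this returns a sigma level of about 6.0, which is where "six sigma" gets its name.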

Once the current state is measured, it needs to be analyzed. Most six sigma projects spend quite a while in this phase; for example, detailed analysis of markets, machines, people, shifts, and outputs takes considerable time. A pure lean project may take only a few hours or days to analyze. For example, if the project is to reduce setup times, the analysis may take just a few hours of observing the current setup times under various conditions. Tools commonly used in the analyze phase include:

Brainstorming

5 Why's

Value Stream Mapping

Control Charts (XBar & R, np, C, U, p)

Scatter Plots

Regression Analysis

Design of Experiments (DOE)

Hypothesis Testing

The next step is improvement. This could range from a quick kaizen-type project, such as moving machinery or making operational changes to improve OEE (overall equipment effectiveness), to a several-week DOE (design of experiments) and regression analysis effort. Tools commonly used in the improve phase include:

Design of Experiments (DOE)

Hypothesis Testing

Brainstorming

Cause and Effect Diagram

Box Whisker Charts

Process Mapping

Lean Manufacturing Tools:

    - SMED (Single Minute Exchange of Die) - Standard Operations - Kaizen - Line Balance and Takt Time - Value Stream Mapping and Analysis - OEE - Work Simplification - Methods Improvement - Error Proofing - 5S

The next step is controlling the improvement. The last thing any team wants is to achieve an improvement and not sustain it. Sustaining it could include training, demonstration by employees, standard operations, SOPs, or multiple process control charts.

Control Charts

    - X Bar & R - I-MR - p - np - c - U - EWMA

Standard Work

Visual Management

Performance Management

Process Mapping
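The X Bar & R chart above is defined by a pair of textbook control-limit formulas: the averages chart uses the grand average plus or minus A2 times the average range, and the range chart uses D3 and D4 times the average range as its limits. A minimal sketch using the standard table constants for subgroups of five; the sample data are invented:

```python
# Control limits for an X Bar & R chart. A2, D3 and D4 are the standard
# table constants for subgroups of size 5; the data below are made up.

def xbar_r_limits(subgroups, a2=0.577, d3=0.0, d4=2.114):
    """Return (LCL, center, UCL) for the averages chart and the range chart."""
    xbars = [sum(g) / len(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    xbarbar = sum(xbars) / len(xbars)   # grand average (center line)
    rbar = sum(ranges) / len(ranges)    # average range
    return {
        "xbar": (xbarbar - a2 * rbar, xbarbar, xbarbar + a2 * rbar),
        "range": (d3 * rbar, rbar, d4 * rbar),
    }

limits = xbar_r_limits([[1, 2, 3, 4, 5], [2, 3, 4, 5, 6]])
```

Points falling outside these limits signal that the improved process has drifted and the team needs to react, which is the whole purpose of the control phase.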

Many implementers get hung up on project duration. Some training programs suggest a black belt project takes a few months on average, while most lean training emphasizes speed, with processes such as the kaizen blitz. A project should take as long as necessary and be solved with whatever tools are needed to obtain maximum improvement. If that means one day, it is a one-day project. If it means three months, then so be it.

The DMAIC process does not have to take weeks. Some problems can be defined, measured, analyzed, improved, and controlled in a matter of days. Just because a problem can be solved quickly does not mean it is merely a lean project rather than a six sigma one.

However, not every problem needs a formal project. For example, if a problem arises and the answer is already known, it should simply be solved once and for all. If something needs to be done to sustain the fix, then those tools should be used. That does not mean it needs a full-blown project.

The bottom line is to use an improvement methodology when you need one, and to use the tools necessary to solve the problem. When tools are forced on an organization rather than pulled into problems, the system is sure to fail.

Sad Sales Negotiators Do a Bad Job

In the quest to do a better job at negotiating deals, sales negotiators have been known to do some pretty wild things in order to condition themselves to perform at a high level - extreme exercising, exposure to hot and cold temperatures, and even eating some pretty weird things. However, is it possible that they've been overlooking the most important thing - how happy they are?

The Power Of Sad

Dr. Robert Cialdini has spent a lot of time studying how we can persuade others and how they can persuade us. In fact he's written a popular book on the topic titled Influence: Science and Practice in which he talks about what causes us to do things that we may not be giving a lot of thought to.

When it comes to sales negotiations, Dr. Cialdini and his peers have done some interesting studies that should cause all of us to sit up and take notice.

The Big Guess

The social scientists who were doing the research started with the hypothesis that when we get sad, we get motivated to do something to change our current circumstances in order to get out of our sad mood.

They took this thinking one step further. They also guessed that sad buyers would be willing to pay higher prices for a given product and sad sellers would be willing to sell a product for a lower price. Do I have your interest now?

The Experiment

The cool thing about being a social scientist is that you get to test your hypothesis on people, not rats. In this case the scientists had their (human) test subjects divided into two groups. One group watched a sad movie and then wrote a paragraph about how the movie made them feel. The other group watched a movie about fish (!) and then wrote about what they had done that day.

Next, both groups were once again divided into two groups and one group was asked to mark on a piece of paper what price they would sell an item at and the other group was asked to mark on a piece of paper what price they would buy an item at.

What the scientists discovered just might scare you. It turns out that their original guess was right: sad buyers ended up being willing to spend 30% more for an item than emotionally neutral buyers. Likewise, sad sellers were willing to sell an item for 33% less than emotionally neutral sellers. The really spooky part of all of this is that the sad buyers and sellers had no idea that their sadness had affected them so much.

Final Thoughts

Although we often get caught up in preparing for our next sales negotiation, what the social scientists have discovered is that we bring everything else that is going on in our lives to the table with us. On a similar note, the other side of the negotiating table does the exact same thing.

Before you start your next sales negotiation, you need to take a minute or two and evaluate how you are feeling. If there is anything that is bringing you down or making you depressed, then you have got to try to find a way to resolve it or at least make it better before the negotiations start. Learn to do this and it will allow you to close better deals and close them quicker.

The Scientific Method - Why Your Child Should Be Using It Often

Friday, July 30, 2010

The scientific method is not just something that you learn in grade school in order to pass science class. It is something that all of us use on a daily basis to make sense of the world around us. The scientific method is basically the process that we go through in our minds to ask questions and find answers. It roughly follows this pattern:

1. The first step is to make an observation and description of a phenomenon or group of phenomena. In other words, you observe a problem or a question and define exactly what the issue is in order to make more sense of it.

2. Formulation of a hypothesis to explain the phenomena. A hypothesis is a theory or an idea of how something can be worked out or how a problem can be solved. Through the process of the scientific method you will test whether your hypothesis is in fact true.

3. Use of the hypothesis to predict the existence of other phenomena. In some cases your hypothesis is part of a sequence of events, and you are curious to see how altering a specific occurrence will alter the overall result of a process. Your goal is to predict quantitatively (meaning with measurable evidence) the results of new observations.

4. Now you take action. You perform a series of experimental tests of the predictions/hypotheses. With more complicated scientific experiments (not necessarily anything that your child will encounter in their schooling), several independent experimenters meticulously plan and carry out properly performed experiments. In the case of a child's science project or simple day-to-day use of the scientific method, the process of experimentation is much cruder. You can test your ideas in a matter of seconds in some cases, and much of the testing that we all do is done mentally and in our homes, schools, or workplaces, not in a laboratory.

5. Repeat steps 3 and 4 until there are no discrepancies between theory and experiment and/or observation. In other words, 'If at first you don't succeed, try again.' Many times our initial theories or predictions of what might happen in an experiment are wrong. There is no shame in guessing wrong. What is important is that you continue trying to formulate new ideas and solutions until you find a hypothesis that works.

The scientific method is not as complicated as you may think, and it is important that your child use its basic outline in his or her daily life. For example, say that your child is drinking juice from a plastic cup and notices that a puddle has formed under the cup. What should your child do? A child without an understanding of the scientific method is stuck and may call on a parent to just solve the problem. On the other hand, a child who understands how to solve problems can handle the situation without parental intervention. A child with an understanding of the scientific method might follow these steps:

1. Identify the problem: There is a puddle of juice underneath my cup

2. Form a hypothesis: Maybe there is a hole or crack in my cup

3. What other phenomena are present with this hypothesis: If there is a hole or crack in my cup it should still be leaking until the juice is gone.

4. Take action: lift the cup off the table and see where the drips are coming from, find the location of the crack.

5. Repeat the process if the hypothesis is not confirmed: I didn't see a hole or a crack; maybe it is just water condensing on the outside of the cup. Repeat the process to see if water is in fact forming on the outside of the cup.

Science Fair Projects - Making a Winning Science Project Step 1b - The Scientific Method Part 1

If you are getting ready to prepare your very own science experiment for the science fair, it's time to make sure you know everything you need to about the scientific method. The scientific method provides a basic structure that you will use when conducting your experiment. It describes the background of the experiment, the process that you will use during your research, and the steps you will take in order to come up with a conclusion to your project. Plus - During the science fair, you'll have to show the judges that you have followed the scientific method and that you understand what each step means.

No matter what type of scientific research you are conducting for your science project, you will use five scientific steps. Cool fact: these are the same steps that professional scientists use when they conduct their experiments - including the scientists at NASA who build spaceships! Here they are:

* Research
* Problem
* Hypothesis
* Project Experimentation
* Project Conclusion

Now, here's what you need to know about each of these steps in order to create a really cool science project:

Research

During this step, you are deciding what experiment you want to conduct by researching different things that interest you. Research means gathering information that might help you plan your experiment. There are many ways to get information during research. For example, you can use your own experiences, you can look information up in a book, or you can use an experiment that you may have already done in class as a starting point. I got the idea for one of my science fair projects while eating dinner one night. I realized that I could taste salty foods in one part of my mouth and sweet foods in another. I asked my dad why this happened and he said that different taste buds taste different foods and are located in different parts of the mouth. We looked online for information about where the taste buds were exactly and compared the pictures online with our own tongues in the mirror. That year, my project was about finding out where the taste buds for everyday foods, like milk, bread, and vegetables, are in the mouth. From that example, you can see that I started with a question that I had through simple observation. My dad and I researched the answer to my question together using books and by looking at our tongues in the mirror. Heads up: when you do the research for your own scientific experiment, make sure that you are doing it on your own. I might have used my dad to help answer my question at first, but I used library books, experiments, and interviews (I called my doctor) in order to do my project on my own.

Problem

The problem part of the scientific method provides the whole purpose for the research and experiments. The problem is usually an open-ended question that you need to solve through the experiment. An open-ended question is one that cannot be answered in just one or two words, unlike a yes-or-no question such as, "Are there taste buds in the mouth?"
In my particular case, my problem was that I wanted to find out where taste buds were for different foods. So my open-ended question was "Where are specific taste buds for common foods?"

Expect to be surprised when writing your question. When I came up with my problem, I realized that I couldn't limit the answer to what I thought I'd learn. For example, I knew that my tongue had taste buds, but my dad had also mentioned that taste buds can occur in other parts of my mouth. That's why my question didn't say, "Where on the tongue are specific taste buds for common foods?"

Make sure you can answer your question through experimentation. Your experiment should help you to come to a conclusion about your initial problem.

Hypothesis

A hypothesis is my favorite part of the scientific method because it is a statement about what you think will happen. You write the hypothesis after you have already done some of your research, but before you perform your experiment. Your experiment will show whether your hypothesis is right or wrong. Here's an example of the hypothesis I used in my experiment: "I believe that different parts of the mouth respond to different tastes. I base this hypothesis on:

1. The front of the tongue tastes sugar, but the sides do not.
2. The sides of the tongue taste salt, but the front does not."

Here are some tips to help you with your hypothesis.

* When creating your hypothesis, it is okay to state why you think your experiment will have a particular conclusion. Remember: you have already observed through research that different parts of the tongue taste different things.
* As you go through your experiment, you might discover that your hypothesis was wrong. If this happens, congratulations! You've experienced something that professional scientists experience every day, and that's why you're doing the experiment, after all! Don't go back and change your hypothesis, though. It's expected (and absolutely fine) to discover that your hypothesis was wrong. Sometimes the science fair judges like to see that, too!
* It might also help to write the hypothesis down so that you remember what it is. Write it down before you start the experiment, just in case the experiment turns out differently than you thought it would.

Now you're ready to do your experiment. To learn how to do this cool next step, you'll want to read part 2 of this article. Visit this link to finish learning how to do a great science fair project. Or, if you are really serious about doing an awesome project, just download your free copy of "Easy Steps to Award-Winning Science Fair Projects" from the link below right now.

Theories of Other-Race Face Identification

Tuesday, July 20, 2010

Five hypotheses have been offered to explain the "other-race" effect in face recognition (Ayuk, 1990; Chance & Goldstein, 1996):

  1. The first hypothesis proposes an inherent difficulty between races: members of some racial groups are more difficult to differentiate from one another than from members of another racial group, hence the saying, "They all look alike." Shared similarities within racial groups create and help maintain this effect. The few experiments that investigated this proposition have shown mixed results. The hypothesis is difficult to test because difficulty in discrimination may be due not to physical sameness but to inappropriate cue utilization, and it cannot be isolated well enough to eliminate other possible factors in the other-race effect.
  2. The second hypothesis is that prejudicial attitudes may influence other-race identification. However, no correlation between identification and attitudes has been found (Lavrakas, Buri, & Mayzner, 1976, cf. Ayuk). Yarmey and Kent (1980) found no evidence of racial or attitudinal biases in race focus. Prejudicial attitudes toward infants, a case similar to the other-race effect in that "all babies look alike," showed no effect; subjects in their study reported having positive regard toward infants. However, Brigham and Williamson (1979) found that a same-race bias did affect recognition memory of African Americans and Caucasians (cited in Yarmey, 1996).
  3. Prior experience or knowledge of another race may also influence processing of a face. Previous exposure to other races depends on cultural factors, which have been explored by Lindsay, Jack, and Christian (1991). They hypothesized that perceptual expertise is required for facial recognition: quantity of experience affects the ability to distinguish one face from another, and representations of similar faces can interfere with one another because faces appear to share the same storage space. Ng and Lindsay (1994) examined the contact hypothesis with Caucasians and Orientals from Canada and Orientals from Singapore, for whom contact with the other race was severely limited. They replicated the other-race effect: Orientals recognized Oriental faces better than Caucasians did, and Caucasians recognized Caucasian faces better than Orientals did. However, differences between Canadian Orientals and Singapore Orientals for Caucasian faces were not significant. Thus, they concluded that the race issue was not related to country of origin.
  4. A fourth hypothesis is that the encoding strategies an individual uses for own-race recognition are also employed for recognition of other-race faces. These strategies are often less than perfect when viewing faces of another race. Although there is evidence that individuals use different cues when asked to describe own-race versus other-race faces, there is no evidence linking cue utilization to the other-race effect.
  5. A fifth hypothesis is differential processing: subjects process own- and other-race faces differently because of inferences and judgments made during initial viewing. Related to this hypothesis is the levels-of-processing framework (Craik & Lockhart, 1972): faces of one's own race are processed deeply for possible character traits, whereas other-race faces are processed only superficially. The human face, however, is rich in perceptual information regardless of race. Part of the problem lies in the perceived similarity of the stimuli and the elusiveness of a precise feature list for describing a particular face. There appears to be no single feature, but rather a group of features, that defines a face of a given race, as has been noted in face recognition research.

Traditionally, a face recognition task is used to examine the other-race effect (reviewed in Bothwell, Brigham, & Malpass, 1989). An initial set of target pictures of African American and Caucasian faces is shown to African American and Caucasian subjects. Subjects are then shown another set of faces consisting of the target pictures mixed randomly with a set of distractor pictures. Subjects must make an "old" judgment for pictures that were in the target set and a "new" judgment for faces that were not. Responses are scored correct or incorrect, and recognition ability is assessed from both. An own-race bias exists if recognition ability is greater for own-race pictures than for other-race pictures.
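
As a concrete illustration, the scoring just described can be sketched in a few lines of Python. The response data below are hypothetical, invented purely for illustration, and the hit-rate-minus-false-alarm-rate index is one simple choice among several used in this literature:

```python
# Sketch of scoring an old/new face recognition task (hypothetical data).
# Each picture gets a subject response ("old" or "new") and a ground truth
# ("old" = from the target set, "new" = a distractor).

def recognition_score(responses, truth):
    """Hit rate minus false-alarm rate: a simple index of recognition ability."""
    hits = sum(r == "old" and t == "old" for r, t in zip(responses, truth))
    false_alarms = sum(r == "old" and t == "new" for r, t in zip(responses, truth))
    n_targets = truth.count("old")
    n_distractors = truth.count("new")
    return hits / n_targets - false_alarms / n_distractors

# A hypothetical subject viewing own-race and other-race picture sets:
truth = ["old", "old", "new", "new"]
own_race = recognition_score(["old", "old", "new", "old"], truth)    # 1.0 - 0.5 = 0.5
other_race = recognition_score(["old", "new", "new", "old"], truth)  # 0.5 - 0.5 = 0.0
own_race_bias = own_race > other_race  # own-race bias present in this toy data
```

Using both hits and false alarms, rather than hits alone, matters: a subject who calls every face "old" gets a perfect hit rate but shows no real discrimination.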

REFERENCES

Ayuk, R.E. (1990). Cross-racial identification of transformed, untransformed, and mixed-race faces. International Journal of Psychology, 25, 509-527.

Bothwell, R.K., Brigham, J.C., & Malpass, R.S. (1989). Cross-racial identification. Personality and Social Psychology Bulletin, 15, 19-25.

Chance, J.E., & Goldstein, A.G. (1996). The other-race effect and eyewitness identification. In S.L. Sporer, R.S. Malpass, & G. Koehnken (Eds.), Psychological Issues in Eyewitness Identification (pp. 153-176). NJ: Lawrence Erlbaum.

Ng, W., & Lindsay, R.C.L. (1994). Cross-race facial recognition: Failure of the contact hypothesis. Journal of Cross-Cultural Psychology, 25, 217-232.

Yarmey, A.D., & Kent, J. (1980). Eyewitness identification by elderly and young adults. Law and Human Behavior, 4, 359-371.

Death and Adjustment - The Hypothesis - Part - XIII

Philippe Aries has chronicled Western attitudes toward death in five basic patterns from the 5th to the 20th century. The first pattern describes death as a peaceful sleep until the return of Christ, followed by a non-threatening afterlife. The second pattern describes death as a supernatural event followed by judgment, characterized by an anxiety-provoking afterlife. In the third pattern, death was converted from a supernatural concept to a natural one; anxiety about death remained, but without anxiety about the afterlife. The fourth pattern described death as something related to others, that is, the death of others, with the emphasis on separation from them.

Finally, the fifth pattern had death denied for individuals. If we consider Aries's description in reverse, it becomes to some extent similar to the stages described by Kubler-Ross. The final perception of death in Western society described by Aries was denial, which Kubler-Ross describes as the most immature stage of adjustment. The fourth pattern described by Aries resembles the second stage of Kubler-Ross, anger, which is more mature than denial. The third pattern of Aries resembles the "bargaining" described by Kubler-Ross. The second pattern of Aries corresponds best to the fourth stage of Kubler-Ross, and the first pattern described by Aries corresponds most closely to the fifth stage. So it seems that the personal adjustment process of the terminally ill or dying also matches the social changes in the concept of, and attitude toward, death in the population at large.

The purpose of the above discussion is to lend more legitimacy to the use of the Kubler-Ross model for those who are not dying or terminally ill. Now let us discuss the five stages of adjustment for average people.

The first stage, denial, represents the situation when death is on the conscious mind. So we must note first that there are situations in our practical life when death is somewhat dissociated, which is a more severe form of isolating a subject from the conscious mind. Then, when the thought comes to our conscious mind, we deny it, which is an immature form of defense against any unacceptable truth. When and if this defense breaks down, anger about the truth of death arises, which is also a sign of unacceptability and is likewise unhealthy for the individual. When the anger subsides and the truth that death is inevitable takes firmer hold, one tries to bargain and adjust oneself to the situation. The next stage, depression, indicates the realization of helplessness and surrender to the truth. The final stage, called acceptance, can be explained by two themes: one is accepting death with the thought that it will end oneself absolutely, and the other is taking death as something that will not end one absolutely. The first theme should accompany depression, since I have described existence as our basic criterion; thus it is ultimately inconsistent with the change of stage from depression to acceptance. So I will take the second theme as the way acceptance is established.

From the above discussion we can see that the condensed or intense situation surrounding death reveals the overall situation of death for average individuals, which ultimately leads us to adjustment to death, though the extent of that adjustment remains undetermined in this hypothesis so far. What is very important, I believe, is that the Kubler-Ross model describes adjustment to death when death is very much conscious in one's mind. But in the current trend of civilization I see death as more than denied, which may be described as dissociated, because very few people remember death in their daily lives the way the terminally ill do. In fact, people totally forget death and arrange their lives as if there were no such thing, especially when they are healthy and average adults.

Finally, in this part of the hypothesis I want to add an additional stage of perceiving death for any individual before the stage of denial, which I will term dissociation. So the traditional stages of adjustment to death for average healthy adults become, for this hypothesis at least, something like: 1) Dissociation, 2) Denial, 3) Anger, 4) Bargaining, 5) Depression, and 6) Adjustment. I will not advocate for the accuracy or perfection of this process as healthy adaptation, but I will assume that this pattern runs through a major portion of people in our society. In the following parts I hope to evaluate this ongoing process and related issues.

Psychoterrestrial Hypothesis - The Illusion of Authority

Disclaimer: Once again I am putting out a disclaimer, both as a reaffirmation to keep myself in check and as a reminder to you, the reader, not to buy into anything I write or say, or what anybody else writes or says for that matter, hook, line, and sinker. The only authority by which I write is strictly experiential, in that I am expounding and putting into words, which is difficult at best, my own personal viewpoint and findings. To take anything that anybody has put forth verbatim, by rote, is nothing less than a fool's path. That being said, this opening statement is also the premise for this next essay. I am writing only from my own limited view. I encourage all to take all such writing with a large grain of salt and do something rare and adventurous: think for yourself. Use only your own verifiable experiences and personal platforms of observation. Please be brave enough to take off the sheep's clothing, take responsibility for your own thoughts, and courageously explore for yourself, by yourself. Thank you for taking this little bit of advice into consideration. Now we can move on to just another idle aspect of my findings and observations.

Who, in the realm of the Paranormal, that being anything in the field of study, such as it is, dealing with the anomalous, esoteric, or unexplained, can wholly establish themselves as an authority on it?

According to the online dictionary TheFreeDictionary.com, the word "authority" is defined as follows:

au·thor·i·ty [uh-thawr-i-tee, uh-thor-] n., pl. -ties.

1. a. The power to enforce laws, exact obedience, command, determine, or judge.
   b. One that is invested with this power, especially a government or body of government officials: land titles issued by the civil authority.
2. Power assigned to another; authorization: Deputies were given authority to make arrests.
3. A public agency or corporation with administrative powers in a specified field: a city transit authority.
4. a. An accepted source of expert information or advice: a noted authority on birds; a reference book often cited as an authority.
   b. A quotation or citation from such a source: biblical authorities for a moral argument.
5. Justification; grounds: On what authority do you make such a claim?
6. A conclusive statement or decision that may be taken as a guide or precedent.
7. Power to influence or persuade resulting from knowledge or experience: political observers who acquire authority with age.
8. Confidence derived from experience or practice; firm self-assurance: played the sonata with authority.

In what aspect of the Paranormal, gross or otherwise, has anyone furnished any kind of realizable and compelling evidence that satisfies the definition provided, through any kind of unquestionable process?

Crickets

That's what I thought. There is no one who satisfies this definition in any way, by any means. This little exercise should serve right away as a sort of BS meter that one can apply to any contextualization and conceptualization of any aspect of the Paranormal or Esoteric. Period.

So right away you may have applied this simple exercise to any number of situations, articles, books, and lectures that popped into your mind, and found you were left with the radical concept of absolutely nothing. Bupkis. Nada. Not a bad return on your investment, for those who have purchased an extensive library concerning the Paranormal or any aspect of Esoterica in general.

Now, before anybody gets upset and starts to bang the drum of revolt in retort, know that I am not denigrating the experience. I am not discounting your findings. I am not cheapening the phenomena. On the contrary, I am doing you a favor. I am helping you to see, perhaps, the co-dependence you may have developed by seeking information about your experiences from a source other than yourself. Perhaps you have done exactly that, and some words and concepts resonated so well with you that you now parrot this individual's findings to others, because their explanation seems to trump your own to a point of personal satisfaction. But does it? Have you really applied someone else's conceptualizations to your unexplainable situation? Let not this last question become an open discourse on religious interpretation or spiritual doctrine; that is a whole other article in and of itself. All religious dogma aside, I challenge you to find anyone who knows more about your experiences than you. And prithee, tell me should you find them, because that is a concept worthy of its own discussion and discourse.

The problem with authority in any aspect is that its dictates, based on a singular viewpoint, tend to have a psychological impact at a societal level. In short, the squeaky wheel has made itself a leader. Leaders generate followers. Followers, in my view (and please see the above disclaimer so as not to rake me over the proverbial coals), are seemingly individuals who are either afraid or too lazy to formulate or trust their own viewpoint. It's just easier to let someone else do all the work and adopt an attitude of "I will sacrifice some of my own sense of self, identify as 'other than,' and in turn be led by this other, because I am OK being a non-thinking sheep."

I'm sorry, but as far as I know, meek does not mean intellectually submissive. You have a mind. Explore it. Use it. Try to find its limitations and smash them into oblivion. It's not as scary as it sounds. The idea of exploring within one's self to help alleviate the mysteries without seems far more sound than letting an external source with make-believe expertise crawl around in your head and tell you what you should think and feel. Unless one is clearly suffering from some sort of mental handicap, the burden of how you interpret the reality in and around you is your sole responsibility and no one else's.

So just think now, and I am talking to you experiencers in particular, about all of your anomalous experiences: how you felt, what you saw, what you smelled, what you touched. How can any other person have more information about those experiences than you? They can't; the notion is ludicrous. All they can do is take your firsthand account, translate it through their own filtration processes based on their own experiences, and vomit back to you something that makes perfect sense TO THEM. Not you. This is an authority?

More and more I find all of reality to be completely subjective, maybe ultimately so, and what is more, entirely of our own making, manufacture, and manipulation. Manipulation is an especially operative word there. Why? Manipulation seems to be the realm of leaders of followers. They have formed a clique based, more than likely, on someone else's accepted idea of truth; in effect they have been manipulated and therefore, by proxy, in turn manipulate. A like thing begets a like thing. Only a conscious act of actively engaging your environment and taking responsibility for it will take you out of the "Land of the Lost."

I think some folks have manifested the idea of needing an authority figure in their lives for reasons that may be twofold.

1) They have low self-worth and don't trust themselves, and oddly misplace their trust in another with no way to know what is going on in that person's head. To me, that is truly scary.

2) It is easier to point the finger of blame squarely at an external source when things go horribly wrong. You do not have to own the mistake if you are the follower. You are just a victim, right? WRONG!

I cannot stress strongly enough how important it is that you find your own answers within, and when you do find them, trust them. They are you. They are for you only, and no one else. Share your kind actions and teach by example. Moving air around is just that. If something is so important that you just can't keep it to yourself for whatever reason, whisper it. People listen more intently when the message is softly spoken.

But what do I know? I'm no authority.

Fractal Model of Consciousness - A Hypothesis of Non-Machine Or Sentient Consciousness

"O thou enwrapped arise, prophecy and thy lord magnify--"

Nanotechnical constraints may readily be invoked in support of the assertion that any manifold exhibits a "Planck scale" of elaboration of its small scalar structure, beyond which its space is closed and its system's structure necessarily undefined. Such a "Planck scale" represents a singular point with regard to that structure, a singularity being the point at which the extension of manifold structure fails.

Now, on the question of the nanotechnical objection to the theoretical notion of infinite scalar regressibility of manifold structure: imagine yourself a transcendental, Wholly Other, non-dimensional point observer; a non-structured, zero-point consciousness observing the universe of your biological set manifold from a transcendent vantage point from which you may establish what is termed non-set point logic with respect to the given universe of your set manifold. From your privileged divine perspective, there will be no essential nanotechnical constraints on the elaboration of the small scale of your bio-system's structure. The principle of infinite scalar regressibility of structure that you will take for granted from your privileged "divine" perspective leads to the conclusion that any part of the manifold is equal to the whole, and consequently that all appearances of small-large scale duality in the fragmentation of space structure are an illusion. That is, from a numinous, Wholly Other, zero-point frame of reference, the epistemic validity of set structural appearances in the dichotomy of the small and the large will be called into question, for from an absolute point self-frame of reference, set structure will appear wholly non-convergent and, consequently, set space will appear to have no essential "Planck scale" boundary.

Appreciate that the no-zero-point axiom impacts Einstein's supposed space-time continuum, breaking it up into an infinity of fractal shards, to the effect of an essential ontological crisis with regard to set phenomena. That is, the ontologically defined substantiality of set structure is contingent on a zero-point convergence of structure providing a continuum-al matrix-plenum in which the explosive fragmentation of structure may find inherence as a convolved whole.

The no-zero-point axiom is, therefore, equivalent to a Buddhistic assertion of the ontological emptiness (shunyata), impermanence (anicca), or selflessness (anatta) of set "things." From the perspective of a hypothetical non-structured consciousness, the physical world is a virtual reality system.

From the non-set point logic perspective of non-structured consciousness, ontologically defined reality or "being-ness" of a structure is absolute at zero-point convergence of such structure. Non-set point logic operation of infinite scalar regression of a set structure, therefore, represents a non-set logic demonstration of the futility of set system's quest for zero point ontological fulfillment by scalar regression of structure.

The non-set logic view that a set structure is non-convergent with respect to an absolute non-structured point implies that there is no smoothly graded evolutionary transitional pathway from non-being to being. For this reason, it is held that the large-to-small scale convergence of a set structure represents, in its transitions, not a hierarchy of levels of being but a hierarchy of levels of non-being simulation of being.

Being is absolute at hypothetical zero-point of convergence of structure, and non-being absolute at any given non-zero point "convergence" of structure.

Thus, it is averred that "TO BE" and "NOT TO BE" is the absolute duality beyond all logical reconciliation.

Adaptive Markets Hypothesis (AMH) As an Alternative to the Efficient Market Hypothesis (EMH)

The Efficient Market Hypothesis traces back to the 1900 dissertation of French mathematician Louis Bachelier. It says that markets are completely efficient: all prices in the market already reflect the available knowledge and expectations of investors.

This concept of market efficiency has a wonderfully counterintuitive and seemingly contradictory flavor to it: the more efficient the market, the more random the sequence of price changes generated by that market must be, and the most efficient market of all is one in which price changes are completely random and unpredictable. This is a direct outcome of many active traders using the smallest informational advantages at their disposal and attempting to profit from their information, which quickly eliminates the profit opportunities that gave rise to their actions.
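
This counterintuitive point is easy to demonstrate. The Python sketch below uses simulated rather than real prices: it generates purely random returns and computes their lag-1 autocorrelation, which comes out essentially zero, meaning yesterday's change carries no usable information about today's:

```python
import random

# In an idealized efficient market, price changes are random; past returns
# carry no information about future returns. Simulated data, not real prices.
random.seed(42)
returns = [random.gauss(0.0, 0.01) for _ in range(10_000)]

# Lag-1 autocorrelation: how strongly each return predicts the next one.
mean = sum(returns) / len(returns)
numerator = sum((returns[i] - mean) * (returns[i + 1] - mean)
                for i in range(len(returns) - 1))
denominator = sum((r - mean) ** 2 for r in returns)
lag1_autocorr = numerator / denominator  # near 0 for a truly random series
```

A persistently nonzero autocorrelation in real price data would be exactly the kind of exploitable pattern that, on this hypothesis, traders compete away.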

If we assume an idealized world of "frictionless" markets and costless trading, then prices must always fully reflect all available information and no profits can be garnered from information-based trading because such profits have already been captured.

The Efficient Market Hypothesis (EMH) is particularly relevant to the alternative investments industry because the primary attraction of alternative investment products is their higher expected returns and, in many cases, lower risk as measured by correlation to a broad-based market index such as the S&P 500 or the Dow Jones Industrial Average.

If the Efficient Market Hypothesis (EMH) is true, it should not be possible to generate higher expected returns after adjusting for risk.

Let us look at the Capital Asset Pricing Model (CAPM).

According to this model, the risk-adjusted expected return of any investment "p" is determined by the market beta of that investment:

E[Rp] = RF + β(E[Rm] - RF)

Solving for β, we get:

β = (E[Rp] - RF) / (E[Rm] - RF)

Where:

RF = the return on a risk-free asset, such as short-term U.S. Treasury bills

E[Rm] = the expected return on the market portfolio, normally approximated by the S&P 500

β = the sensitivity of the investment's return to the excess return of the market
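The model above can be sketched in a few lines of code. This is a minimal illustration with assumed numbers (a 3% risk-free rate, an 8% expected market return, and a hypothetical fund returning 10%), not real market data:

```python
def capm_expected_return(beta, rf, rm):
    """CAPM: E[Rp] = RF + beta * (E[Rm] - RF)."""
    return rf + beta * (rm - rf)

def implied_beta(rp, rf, rm):
    """Solve the CAPM equation for beta: (E[Rp] - RF) / (E[Rm] - RF)."""
    return (rp - rf) / (rm - rf)

rf = 0.03   # assumed risk-free rate (3%)
rm = 0.08   # assumed expected market return (8%)

# A hypothetical fund returning 10%: what beta would CAPM need to explain it?
beta = implied_beta(rp=0.10, rf=rf, rm=rm)
print(round(beta, 2))   # 1.4

# Alpha: return in excess of what CAPM predicts for a given beta (here, 1.0).
alpha = 0.10 - capm_expected_return(1.0, rf, rm)
print(round(alpha, 2))  # 0.02
```

If a fund's measured beta is well below the implied value, the leftover return is the "excess expected return" (alpha) discussed below, which either the model or market efficiency must account for.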

If we test a sample of, say, 10 different alternative investment products, we will find that a number of them exhibit excess expected returns, which implies either that the model is wrong or that the markets are inefficient.

MIT finance professor Andrew Lo is turning heads with a new theory about the way markets behave, called the Adaptive Markets Hypothesis (AMH). The AMH explains the apparent irrationality of markets as a rational reaction to changing environmental conditions.

I do believe in the adaptive markets hypothesis as an alternative to the Efficient Market Hypothesis (EMH). It is a better model.

The Adaptive Markets Hypothesis (AMH) is based on principles of evolutionary biology such as competition, mutation, reproduction, and natural selection.

It is a theory that expectations about market conditions are based on somewhat imperfect perceptions of how recent events might affect the future, and are formed with less-than-perfect rationality.

Despite the qualitative nature of this new paradigm, the Adaptive Markets Hypothesis offers a number of surprisingly concrete implications for the practice of portfolio management. Based on evolutionary principles, the Adaptive Markets Hypothesis implies that the degree of market efficiency is related to environmental factors characterizing market ecology such as the number of competitors in the market, the magnitude of profit opportunities available, and the adaptability of the market participants. According to Andrew W. Lo, the Adaptive Markets Hypothesis can be viewed as a new version of the efficient market hypothesis, derived from evolutionary principles.

The AMH paradigm views markets as ecological systems in which different groups or "species" compete for scarce resources. The system will tend to exhibit cycles in which competition depletes existing resources (trading opportunities), but new opportunities then appear.

Under the Adaptive Markets Hypothesis (AMH), investment strategies will undergo cycles of profitability and loss in response to changing business conditions, the number of competitors entering and exiting the industry, and the type and magnitude of profit opportunities available. As opportunities shift, the affected populations will also shift.

The AMH has a number of concrete implications for the alternative investment industry. The first implication is that, contrary to the classical EMH, arbitrage opportunities do exist from time to time under the AMH, for without such opportunities there would be no incentive to gather information, and the price-discovery function of financial markets would collapse.

As long as there are active, liquid financial markets, there will be profit opportunities. However, as traders exploit them, they will disappear. I believe that new opportunities are continually being created as some traders die out, as others are born, and as institutions and business conditions change.

In recent decades, the efficient market hypothesis, which had been near dogma since the early 1970s, has taken some serious body blows, and it will continue to do so.

The Adaptive Markets Hypothesis (AMH) is a much better model. Rather than the inexorable trend toward higher efficiency predicted by the EMH, the AMH implies considerably more complex market dynamics, with cycles as well as trends, panics, manias, bubbles, crashes, and the other phenomena routinely witnessed in natural market ecologies.

The second implication is that trading strategies also wax and wane, performing well under certain market conditions and poorly under other market conditions.

The third implication is that innovation is the key to survival.

Finally, the AMH has a clear implication for all financial market participants:

Survival is the only objective that matters. While profit maximization, utility maximization, and general equilibrium are certainly relevant aspects of the market ecology, the organizing principle in determining the evolution of markets and financial technology is simply survival.

Let us now look at three predictions of the AMH:

Profit opportunities will generally exist in financial markets.

The forces of learning and competition will gradually erode these profit opportunities.

More complex strategies will persist longer than simple ones.
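The first two predictions can be illustrated with a toy simulation. This is not Lo's model, just a hedged sketch under simple assumptions: a fixed pool of mispricing is shared among active traders, attractive per-trader profits draw in competitors, and competition erodes the opportunity:

```python
# Toy sketch (not Lo's model): per-trader profit from a fixed pool of
# mispricing, with competitors entering while profits remain attractive.

def simulate(opportunity=100.0, entry_threshold=1.0, periods=10):
    traders = 1
    history = []
    for _ in range(periods):
        per_trader = opportunity / traders  # the pool is split among traders
        history.append(per_trader)
        if per_trader > entry_threshold:
            traders += 1                    # profits attract a new competitor
        else:
            traders = max(1, traders - 1)   # losses drive a competitor out
    return history

profits = simulate()
print(profits[0], profits[-1])  # 100.0 10.0: competition erodes the profit
```

In a richer version, the opportunity pool itself would shift over time, letting per-trader profits recover as competitors exit, which is the cyclical dynamic the AMH emphasizes.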

According to Wikipedia:

The AMH has several implications that differentiate it from the EMH such as:

  1. To the extent that a relation between risk and reward exists, it is unlikely to be stable over time.
  2. Contrary to the classical EMH, arbitrage opportunities do exist from time to time.
  3. Investment strategies will also wax and wane, performing well in certain environments and performing poorly in other environments. This includes quantitatively-, fundamentally- and technically-based methods.
  4. Survival is the only objective that matters; profit and utility maximization are secondary.
  5. Innovation is the key to survival because, as the risk/reward relation varies through time, the best way of achieving a consistent level of expected returns is to adapt to changing market conditions.

How to Make a Hypothesis For a Chemistry Set Science Kit Experiment

So you have a beautiful brand new science kit, such as a chemistry set, and you want to set up a truly scientific experiment, something really professional, something tightly organized and keenly observed. Sounds like a great idea so far! So where do you begin?

The first step, perhaps the most important step, is a well-thought-out hypothesis. This article provides instructions for how to make a great hypothesis.

A hypothesis is just a question and what you think the answer is. It's been called an "educated guess." To write a good one, keep two principles in mind: your hypothesis should be precise and it should be simple. It's usually written as an "If...then..." statement.

Contrary to what you may believe, most science kit experiments are carried out with a pretty good idea of what will happen. The goal of the experiment is to confirm that idea. And the name of that idea is the hypothesis.

So, if you look at your chemistry set or science kit sitting there with its brand-new bottles and think to yourself, "I'll bet if I combine the ammonium nitrate with the water it will get colder, that's what happens in those cold packs," well, you've got a basic hypothesis right there!

If you further start thinking and wondering, "I wonder what would happen if I added a whole bunch of ammonium nitrate to water. Would it get colder faster? Would it drop to an even lower temperature? How do they measure the right amounts to put in those cold pack things?" then you are really thinking like a scientist!

You can expand your hypothesis to read something like the following, "This experiment will measure temperature effects across time from varying amounts of ammonium nitrate dissolved in water. Hypothesis: If a greater amount of ammonium nitrate is added to water, the temperature of the solution will drop faster, and the greater the amount of ammonium nitrate added to water, the lower the end temperature will be before stabilizing."

You will notice that the hypothesis is very precise: it states exactly, with no fuzziness, just what the experimenter will be measuring and what he expects the results to be. A poor hypothesis would be the following: "Hypothesis: Adding ammonium nitrate will make the water colder." It is not at all precise. Colder than what? It's not simply water after you add ammonium nitrate, is it? It's a solution. What do you mean by "colder"? How are you measuring this? The hypothesis above answers all these questions with exactness.

You will also notice that the original hypothesis is very simple. It uses as few words as possible. A poor hypothesis would be the following, "When I add greater amounts of ammonium nitrate from the chemistry set to the water to make a solution like in a cold-pack from the store, then measure the temperature as described, I expect to see the numbers go down quicker than they would with a small amount of ammonium nitrate. I also think there will be a point that the temperature stops dropping and levels off, but I think that point will be lower for larger amounts of ammonium nitrate." The original hypothesis keeps things very simple.
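To see how such a hypothesis drives concrete measurements, here is a small sketch of how the readings from the ammonium nitrate experiment could be organized and compared. The temperature values are invented placeholders for illustration, not data from a real trial:

```python
# Hypothetical readings: temperature (deg C) every 30 s after adding
# ammonium nitrate to 100 mL of water. Invented values for illustration only.
trials = {
    5.0:  [21.0, 18.5, 16.9, 16.0, 15.8, 15.8],  # grams of NH4NO3 : readings
    15.0: [21.0, 15.2, 11.8, 10.1,  9.5,  9.5],
}

for grams, temps in trials.items():
    initial_drop = temps[0] - temps[1]  # degrees lost in the first 30 s
    final_temp = temps[-1]              # temperature after leveling off
    print(grams, initial_drop, final_temp)

# The hypothesis predicts: a larger amount of ammonium nitrate gives a
# bigger initial_drop and a lower final_temp.
```

Laying the data out this way makes the hypothesis directly checkable: each measured quantity (drop rate, final temperature) maps to one of its predictions.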

A good hypothesis guides your experiment. Every observation is taken with an eye to disproving that hypothesis. Yes, you heard right: DISproving the hypothesis. A good scientist knows that the best way to prove the hypothesis is right is by trying to prove it is wrong.

A good scientist is very, very careful and critical at each stage of the experiment, recording exactly what happens and noticing every detail that could potentially be impacting the results and disproving the hypothesis. A good scientist carefully repeats trials and reanalyzes data looking vigilantly for flaws. A good scientist uses all of the materials available in the science kit to test the hypothesis. In the end, if the results still match his hypothesis, then and only then can he begin to say it might be true. A good scientist still wants to see that this success is repeatable, so he may run the whole experiment again at another date, or ask a fellow scientist to do so.

If the results do not support the hypothesis, then the scientist has really learned something! Is it time to get a new chemistry set because this one doesn't give you the results you were looking for? No, that is not the right conclusion. This is where the most interesting part of science comes in: follow-up investigative experiments. The hypothesis is just your best guess, so you don't really know whether or not it is true. This is where a science kit begins to have all the thrill of a detective novel, as you, the scientist, carefully watch for clues, rack your brain for alternative explanations and likely culprits, and devise plans to follow up a hunch. In that case, you get to write another hypothesis!

Efficient Market Hypothesis - Can You Beat The Market?

The Efficient Market Hypothesis, or EMH, is a concept developed by Eugene Fama, which asserts that the prices of financial instruments reflect all known information about their future values and the beliefs of investors about those particular financial instruments. This means that you, the investor, cannot outperform the market in the long run.

Here is a quick example: Suppose you watch the evening news, and there is a report about how high-tech companies are getting more popular. You think to yourself that this would be a good investment, and you decide to buy stocks of high-tech companies. Will you outperform everybody? Probably not. The knowledge you have about the stock market is available to each and every individual. This means that you are not the only one who noticed that high-tech companies are getting more popular, and hence these beliefs are already reflected in the price!

This may mean the price is higher than it would be if not for that particular belief. Please note that this does not mean that every idea and notion you may have about the market is already reflected in the price. You could predict something that nobody even considered, but that would be luck! You could also think you know how a particular sector or stock will perform and not get it right. This is what happens most of the time. Thus, in the long run, the prices of financial instruments already reflect the consensus of beliefs about the particular stock, bond, etc. and its future performance.

The EMH is usually divided into three categories. The first is called Weak Form Efficiency. This means that you cannot outperform the market in the long run by using historical prices. This includes technical analysis and other statistical methods. For example, you may analyze past prices and see that on Mondays, stocks usually rise by 2%. You think you have a good shot at making some money out of it. The problem is that everybody has access to past prices, everybody would try to make money that way, and hence the price would already reflect the use of those tools. This may be debatable, and some have shown that there are statistical anomalies that persisted over long periods of time (although most have faded away by now).
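The weak form can be illustrated with a simple statistical check. The sketch below generates a random-walk price series (stand-in data, not real prices) and computes the lag-1 autocorrelation of its returns; under weak-form efficiency, real return series should look similarly patternless:

```python
import random

# Sketch of a weak-form check: if past prices carry no usable information,
# daily returns should show roughly zero autocorrelation. The price series
# below is randomly generated for illustration only.
random.seed(0)
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))  # a random walk

returns = [(b - a) / a for a, b in zip(prices, prices[1:])]

def lag1_autocorrelation(xs):
    mean = sum(xs) / len(xs)
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(len(xs) - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

# For a true random walk this hovers near zero; a persistent, sizable value
# on real data would hint at exploitable structure in past prices.
print(lag1_autocorrelation(returns))
```

On real price data the same idea is usually pursued with formal tests (for example, the Ljung-Box test across several lags), but the logic is the same: weak-form efficiency predicts no exploitable pattern in past returns.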

The second category is the Semi-Strong Efficiency. This means that the prices reflect all known information and news about it, which means that fundamental analysis cannot help you in analyzing the value of the stock because yet again, everybody has access to the news and information.

The last category is, of course, Strong Form Efficiency, which means that the prices reflect ALL information and no one can earn excess returns. Of course, that would have to take into account the legal issue of insider trading. For those who do not know, insider trading means that someone INSIDE the company, who has knowledge that the public does not yet know (such as a big merger), uses it for personal gain. If insider trading were not illegal, then you could outperform the market, and people have. Insider trading is very lucrative, and that is why each year you can read news about someone being accused of it.

You could argue that you know several star investors and hedge funds that outperform the market, but do realize this: there are more that fail, and the percentage that consistently outperforms is VERY small. So small, in fact, that it may be attributed to sheer luck.

So how efficient are the markets? The bigger the market, the more efficient it usually is, as more and more people try to use information to help them gain money and outperform it.

The Israeli Speculator is currently pursuing a Master's degree in Financial Engineering and has been involved in investing and the finance community for 5 years.

Critical period hypothesis

The critical period hypothesis (CPH) refers to a long-standing debate in linguistics and language acquisition over the extent to which the ability to acquire language is biologically linked to age. The hypothesis claims that there is an ideal 'window' of time to acquire language in a linguistically rich environment, after which this is no longer possible due to changes in the brain. The hypothesis has been discussed in the context of both first (FLA) and second language acquisition (SLA), and is particularly controversial in the latter. In FLA, it seeks to explain the apparent absence of language in individuals whose childhood exposure was very limited, and in SLA it is often invoked to explain variation in adults' performance in learning a second language, which is very often observed to fall short of nativelike attainment. Various ages have been proposed for the supposed end of the CPH; those that point to pre-adolescent ages such as 12 have been vulnerable to alternative theories which invoke psychological or social factors applying as children move into adolescence.
History
The critical period hypothesis is associated with Wilder Penfield, whose 1956 Vanuxem lectures at Princeton University formed the basis of his 1959 work with Lamar Roberts, Speech and Brain Mechanisms. Penfield and Roberts explored the neuroscience of language, concluding that it was dominant in the left hemisphere of the brain on the basis of hundreds of case studies spanning many decades. The review focussed on how individuals with brain damage evidenced atypical linguistic performance, rather than examining neurotypical cases of 'normal' language acquisition, and the authors' conclusions were also based on the prevailing tabula rasa view that children were born without any real innate language ability; however, linguistic "units", once "fixed", would affect later learning.[1] Their recommendations for language schooling advised starting early in order to avoid fixed effects; though these claims did not form the core of the book, being confined to the last chapter, other researchers and popular opinion were much influenced by them. The hypothesis was developed by Eric Lenneberg in his 1967 Biological Foundations of Language, which set the end of the critical period for native language acquisition at 12. The hypothesis has been fiercely debated since then, and has continued to inform popular assumptions about the presumed (in)ability of adults to fluently learn a second language.
In SLA, a weaker version of the CPH emerged in the 1970s. This refers to a sensitive period in which nativelike performance is unlikely but not ruled out.[2] The strongest evidence for the CPH is in the study of accent, where most older learners seem not to reach a native-like level. This leads some researchers to apply the CPH only to second language phonology rather than all aspects of language; indeed, a CPH was not seriously considered for syntax until the 1990s, in research that remains a minority view.[3] However, under certain conditions, native-like accent has been observed, suggesting that accent in SLA is affected by multiple factors, such as identity and motivation, rather than a biological constraint.[4]
Children without language
The CPH as applied to first language acquisition proposes that a child deprived of exposure to natural language would fail to acquire it if exposure commenced only after the end of the critical period. Because testing such a theory would be unethical, in that it would involve isolating a child from the rest of the world for several years, researchers have gathered evidence of the CPH from a few victims of child abuse. The most famous example is the case of Genie (a pseudonym), who was deprived of language until the age of 13. Over the following years of rehabilitation, improvement in her ability to communicate was noted, but during this time she did not develop the language ability common to other children.[5] However, this case has been criticised as a firm example of the critical period in action, and data has not been gathered from Genie since the 1970s.[6]
Although there are several cases on record of deaf children being deprived of sign language, this could also count as abuse. One case in which no abuse took place is that of Chelsea, whose deafness was left undiagnosed until the age of 31. Once hearing aids had apparently restored her hearing to near-normal levels, she seemed to develop a large vocabulary while her phonology and syntax remained at a very low level.[7] The implications of this have been disputed, given the apparently unlikely circumstances of Chelsea's diagnosis.[8]

Old Theory of Phytoplankton Growth Overturned, Raises Concerns for Ocean Productivity

A new study concludes that an old, fundamental and widely accepted theory of how and why phytoplankton bloom in the oceans is incorrect.
The findings challenge more than 50 years of conventional wisdom about the growth of phytoplankton, which are the ultimate basis for almost all ocean life and major fisheries. And they also raise concerns that global warming, rather than stimulating ocean productivity, may actually curtail it in some places.
This analysis was published in the journal Ecology by Michael Behrenfeld, a professor of botany at Oregon State University, and one of the world's leading experts in the use of remote sensing technology to examine ocean productivity. The study was supported by NASA.
The new research concludes that a theory first developed in 1953 called the "critical depth hypothesis" offers an incomplete and inaccurate explanation for summer phytoplankton blooms that have been observed since the 1800s in the North Atlantic Ocean. These blooms provide the basis for one of the world's most productive fisheries.
"The old theory made common sense and seemed to explain what people were seeing," Behrenfeld said.
"It was based on the best science and data that were available at the time, most of which was obtained during the calmer seasons of late spring and early summer," he said. "But now we have satellite remote sensing technology that provides us with a much more comprehensive view of the oceans on literally a daily basis. And those data strongly contradict the critical depth hypothesis."
That hypothesis, commonly found in oceanographic textbooks, stated that phytoplankton bloom in temperate oceans in the spring because of improving light conditions -- longer and brighter days -- and warming of the surface layer. Warm water is less dense than cold water, so springtime warming creates a surface layer that essentially "floats" on top of the cold water below, slows wind-driven mixing and holds the phytoplankton in the sunlit upper layer more of the time, letting them grow faster.
There's a problem: a nine-year analysis of satellite records of chlorophyll and carbon data indicates that this long-held hypothesis is not true. The rate of phytoplankton accumulation actually begins to surge during the middle of winter, the coldest, darkest time of year.
The fundamental flaw of the previous theory, Behrenfeld said, is that it didn't adequately account for seasonal changes in the activity of the zooplankton -- very tiny marine animals -- in particular their feeding rate on the phytoplankton.
"To understand phytoplankton abundance, we've been paying way too much attention to phytoplankton growth and way too little attention to loss rates, particularly consumption by zooplankton," Behrenfeld said. "When zooplankton are abundant and can find food, they eat phytoplankton almost as fast as it grows."
The new theory that Behrenfeld has developed, called the "dilution-recoupling hypothesis," suggests that the spring bloom depends on processes occurring earlier in the fall and winter. As winter storms become more frequent and intense, the biologically-rich surface layer mixes with cold, almost clear and lifeless water from deeper levels. This dilutes the concentration of phytoplankton and zooplankton, making it more difficult for the zooplankton to find the phytoplankton and eat them -- so more phytoplankton survive and populations begin to increase during the dark, cold days of winter.
In the spring, storms subside and the phytoplankton and zooplankton are no longer regularly diluted. Zooplankton find their prey more easily as the concentration of phytoplankton rises. So even though the phytoplankton get more light and their growth rate increases, the voracious feeding of the zooplankton keeps them largely in-check, and the overall rise in phytoplankton occurs at roughly the same rate from winter to late spring. Eventually in mid-summer, the phytoplankton run out of nutrients and the now abundant zooplankton easily overtake them, and the bloom ends with a rapid crash.
"What the satellite data appear to be telling us is that the physical mixing of water has as much or more to do with the success of the bloom as does the rate of phytoplankton photosynthesis," Behrenfeld said. "Big blooms appear to require deeper wintertime mixing."
That's a concern, he said, because with further global warming, many ocean regions are expected to become warmer and more stratified. In places where this process is operating -- which includes the North Atlantic, western North Pacific, and Southern Ocean around Antarctica -- that could lead to lower phytoplankton growth and less overall ocean productivity, less life in the oceans. These forces also affect carbon balances in the oceans, and an accurate understanding of them is needed for use in global climate models.
Worth noting, Behrenfeld said, is that some of these regions with large seasonal phytoplankton blooms are among the world's most dynamic fisheries.
The critical depth hypothesis would suggest that a warmer climate would increase ocean productivity. Behrenfeld's new hypothesis suggests the opposite.
Behrenfeld said that oceans are very complex, that water mixing and currents can be affected by various forces, and that more research and observation will be needed to fully understand potential future impacts. However, some oceanographers will need to go back to the drawing board.
"With the satellite record of net population growth rates in the North Atlantic, we can now dismiss the critical depth hypothesis as a valid explanation for bloom initiation," he wrote in the report.