Chance of Error
Mistakes
When you choose between alternatives, you want to choose correctly. When the choice has consequences, making an error can be costly.

Rejecting the Null Hypothesis
When you test hypotheses, you gather evidence to reject a null hypothesis that a factor has no effect. Data are collected in an effort to "prove" that the factor does have an effect.

Positives and Negatives
If the evidence is strong enough, you have a positive result, i.e., you have demonstrated that the factor has an effect, and you reject the null hypothesis. If the evidence is not strong enough, you have a negative result and you are not justified in rejecting the null hypothesis.

If you make a Type I error by rejecting a true null hypothesis, you have a false positive: the positive result occurred by chance, not because the factor had the effect it appeared to have. If you make a Type II error by failing to reject a false null hypothesis, you have a false negative: you mistakenly stick with the null hypothesis even though it is an inaccurate explanation of the situation.
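To make the two kinds of mistake concrete, here is a minimal simulation in Python, assuming a two-sample t-test, a hypothetical effect size of 0.5, and 30 observations per group (none of these numbers come from the text):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
ALPHA = 0.05   # conventional significance level (see the Drugs example below)
TRIALS = 10_000
N = 30         # observations per group (an arbitrary illustrative choice)

def rejects_null(effect):
    """Run one experiment; return True if the null (no effect) is rejected."""
    control = rng.normal(0.0, 1.0, N)
    treated = rng.normal(effect, 1.0, N)
    _, p_value = ttest_ind(control, treated)
    return p_value < ALPHA

# When the factor truly has no effect, every rejection is a false positive.
type_1_rate = sum(rejects_null(effect=0.0) for _ in range(TRIALS)) / TRIALS
# When the factor truly has an effect, every non-rejection is a false negative.
type_2_rate = sum(not rejects_null(effect=0.5) for _ in range(TRIALS)) / TRIALS

print(f"false positive (Type I) rate:  {type_1_rate:.3f}")  # close to 0.05
print(f"false negative (Type II) rate: {type_2_rate:.3f}")
```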

Strength of the Evidence
The evidence should be strong enough to avoid a costly mistake.

Drugs
When the drug companies developed Vytorin and asked for FDA approval to market it, they needed to demonstrate that it was effective in treating heart disease. If the drug won approval, doctors would change the way they had been treating patients. Patients would change the way they lived, suffer a suite of side effects from the drug, and pay much more money than before. From the standpoint of the public, the evidence had to be strong. In statistical terms, strength means that the positive results of the drug tests could have occurred by chance alone only in very few instances. In other words, the chance of a false positive (the drug appeared to have an effect, but didn't) needed to be small. The convention is to reject the null hypothesis only if results this extreme would occur by chance in no more than 5 cases out of 100 (a significance level of 0.05). When the cost of a false positive is high, perhaps the chance of getting a false positive should be tightened to 0.01 instead.
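As a sketch of what tightening the threshold means in practice, assuming a one-sided z-test (an illustration, not a method named in the text), the critical value the evidence must exceed grows as the allowed false positive rate shrinks:

```python
from scipy.stats import norm

for alpha in (0.05, 0.01):
    z_crit = norm.ppf(1 - alpha)  # z-score the evidence must exceed to reject
    print(f"alpha = {alpha:.2f}: reject the null only if z > {z_crit:.2f}")
# alpha = 0.05: reject the null only if z > 1.64
# alpha = 0.01: reject the null only if z > 2.33
```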

Air Travel
The TSA screens passengers boarding flights in an attempt to obtain evidence that a passenger is trying to blow up the airplane (rejecting the null hypothesis that the passenger is harmless). False negatives (the passenger passes through the screening but really has a bomb) would be a disaster, so TSA agents should be thorough in gathering data. False positives result in additional screening and inconvenience, but are tolerated for safety's sake. Since the probabilities of Type I and Type II errors are inversely related, if the cost of a false negative is high, then perhaps the chance of getting a false positive should be relaxed to 0.10 instead.
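That inverse relationship can be sketched numerically. Assuming the same one-sided z-test with known unit variance, a hypothetical effect size of 0.5, and 25 observations (all illustrative choices), relaxing the Type I threshold shrinks the Type II error rate:

```python
from math import sqrt
from scipy.stats import norm

effect, sigma, n = 0.5, 1.0, 25   # hypothetical values for illustration
for alpha in (0.01, 0.05, 0.10):
    z_crit = norm.ppf(1 - alpha)
    # beta = chance of missing a real effect (a false negative)
    beta = norm.cdf(z_crit - effect * sqrt(n) / sigma)
    print(f"alpha = {alpha:.2f} -> beta = {beta:.3f}")
# alpha = 0.01 -> beta = 0.431
# alpha = 0.05 -> beta = 0.196
# alpha = 0.10 -> beta = 0.112
```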

Knowledge is Power
The world is full of uncertainties, and some mistakes are worse than others. If you can estimate the cost of each kind of mistake, you can adjust your testing to minimize the chance of making the costly ones. And if you have the resources to collect more evidence, you can lower the chances of both Type I and Type II errors at once.
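A sketch of the same calculation as above, holding the significance level at 0.05 and varying only the sample size (values are again illustrative), shows the false negative rate falling as more evidence is collected:

```python
from math import sqrt
from scipy.stats import norm

effect, sigma, alpha = 0.5, 1.0, 0.05   # hypothetical values, as before
z_crit = norm.ppf(1 - alpha)
for n in (10, 25, 50, 100):
    beta = norm.cdf(z_crit - effect * sqrt(n) / sigma)
    print(f"n = {n:3d}: Type II error rate = {beta:.3f}")
# n =  10: Type II error rate = 0.525
# n =  25: Type II error rate = 0.196
# n =  50: Type II error rate = 0.029
# n = 100: Type II error rate = 0.000
```

Since the Type I rate stays pinned at alpha no matter the sample size, the extra evidence could instead be spent tightening alpha while keeping the Type II rate low.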