

04-17-2018, 06:16 PM


This is the title that appears beneath your name on your posts.


Join Date: Jun 2005
Gender: Male


Re: A revolution in thought
Quote:
Originally Posted by peacegirl
No no, you don't get to call my father names and expect me to do business as usual. Ain't gonna happen.

Why are the images in the same place when they should be shifted? Figured it out yet?
Why do dogs consistently react differently to pictures of people they know compared to strangers?

Yesterday, 03:34 PM


Member


Join Date: Mar 2011
Location: U.S.A.
Gender: Female


Re: A revolution in thought
Quote:
Originally Posted by But
Quote:
Originally Posted by peacegirl
No no, you don't get to call my father names and expect me to do business as usual. Ain't gonna happen.

Quote:
Originally Posted by But
Why are the images in the same place when they should be shifted? Figured it out yet?

You mean parallax? I don't see where this phenomenon does anything to disprove real time vision.
Quote:
Originally Posted by But
Why do dogs consistently react differently to pictures of people they know compared to strangers?

I don't think they do. Statistical significance is sketchy. You say it's been replicated? Where? In real life, have you ever seen a dog (who misses his master) look at a picture and show signs of recognition? Have you ever seen a dog run toward a photo of his master in preference to a stranger? You could do this at home. Observation counts, but you are myopically seeing the results you want to be true (this goes for the experimenters too). This is confirmation bias at its best.
__________________
"We will not solve the problems of the world from the level of thinking we were at when we created them"  Einstein
"The fatal tendency of mankind to leave off thinking about a thing
which is no longer doubtful is the cause of half their errors"  John Stuart Mill

Yesterday, 04:17 PM


liar in wolf's clothing


Join Date: Feb 2005
Location: Frequently about


Re: A revolution in thought
Quote:
Originally Posted by peacegirl
You're right. I went back and saw that I had put it in earlier. I was mistaken. After some changes (due to constructive criticism), I lost track of when I made a particular revision.

Yes, peacegirl, I know: you get confused. You can no longer remember which words belong to the Author, which things you simply made up and inserted, and which things you changed or deleted to suit your own agenda. You could not make your Corrupted Text less Corrupt or fraudulent even if you wanted to, because of your confusion. You do not even recognize the Authentic Text when you see it, such is the magnitude of the Corruption you have done.
But I, the True Steward of the Authentic Text, have an undeniable and scientific way to deconfuse what you have sown: simply reject your Corrupted Text. Do not reward peacegirl's pursuit of lucre by paying $41.00 for her Corrupted Text, filled as it is with her lies. Join us in interpreting the Authentic Text as written by the Author and published in his lifetime.

Yesterday, 04:59 PM


This is the title that appears beneath your name on your posts.


Join Date: Jun 2005
Gender: Male


Re: A revolution in thought
Quote:
Originally Posted by peacegirl
Quote:
Originally Posted by But
Why are the images in the same place when they should be shifted? Figured it out yet?

You mean parallax? I don't see where this phenomenon does anything to disprove real time vision.

It rules out your claim that the supernovae could be closer than we think. If they were closer than about a thousand light years we would know the distance using parallax and there is no way for you to argue around that.
Quote:
Quote:
Originally Posted by But
Why do dogs consistently react differently to pictures of people they know compared to strangers?

I don't think they do. Statistical significance is sketchy.

No, it isn't, and you would know that if you had any idea how it works. Prove me wrong. Take the article I sent you, look up their p-value and tell me what the probability is that they would get their results by chance when their hypothesis was false.
Quote:
You say it's been replicated? Where?

There was quite a list of experiments by different groups and the results were all consistent.
Quote:
In real life, have you ever seen a dog (who misses his master) look at a picture and show signs of recognition? Have you ever seen a dog run toward a photo of his master in preference to a stranger? You could do this at home. Observation counts, but you are myopically seeing the results you want to be true (this goes for the experimenters too). This is confirmation bias at its best.

Of course you could do it at home, but you are totally clueless about how you do a controlled experiment.

Yesterday, 05:58 PM


Member


Join Date: Mar 2011
Location: U.S.A.
Gender: Female


Re: A revolution in thought
Quote:
Originally Posted by But
Quote:
Originally Posted by peacegirl
Quote:
Originally Posted by But
Why are the images in the same place when they should be shifted? Figured it out yet?

You mean parallax? I don't see where this phenomenon does anything to disprove real time vision.

It rules out your claim that the supernovae could be closer than we think. If they were closer than about a thousand light years we would know the distance using parallax and there is no way for you to argue around that.

Maybe it isn't closer, but larger. Absolute brightness and apparent brightness are ways to calculate distance, but this does not prove we see in delayed time.
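The brightness-based distance estimate being argued over can be made concrete. Below is a minimal sketch of the standard distance-modulus relation (the textbook formula, not anything either poster supplied); the magnitude numbers are purely illustrative:

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Distance modulus: m - M = 5*log10(d) - 5, solved for d in parsecs."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Illustrative numbers: a Type Ia supernova has absolute magnitude M ~ -19.3.
# If it appears at apparent magnitude m = +5.7, the implied distance is:
print(distance_parsecs(5.7, -19.3))  # 1000000.0 parsecs
```

A larger (intrinsically brighter) source farther away can mimic a smaller, closer one in apparent brightness, which is exactly why parallax, being independent of brightness, settles the question for nearby objects.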
Quote:
Quote:
Originally Posted by But
Why do dogs consistently react differently to pictures of people they know compared to strangers?

I don't think they do. Statistical significance is sketchy.

Quote:
Originally Posted by But
No, it isn't, and you would know that if you had any idea how it works. Prove me wrong. Take the article I sent you, look up their p-value and tell me what the probability is that they would get their results by chance when their hypothesis was false.

I'm sorry, but you can't get that kind of accuracy from one experiment. It would have to be replicated many times over to even consider the possibility that a dog could distinguish his master from a stranger. It's interesting to note that what the experimenter expects to see (based on his strong bias) can influence how the results are interpreted.
Quote:
You say it's been replicated? Where?

Quote:
Originally Posted by But
There was quite a list of experiments by different groups and the results were all consistent.

The ones I've seen do not show the kind of recognition that is relevant to this conversation. They may show a dog picking out other four-legged animals but not recognizing familiar human faces. Obviously, you don't think observation counts for anything. Why, in the video I posted, did the dog fail to recognize his master after 2 years of being lost and depressed, but when he smelled him, it was instant recognition? Are you telling me that counts for nothing?
Quote:
In real life, have you ever seen a dog (who misses his master) look at a picture and show signs of recognition? Have you ever seen a dog run toward a photo of his master in preference to a stranger? You could do this at home. Observation counts, but you are myopically seeing the results you want to be true (this goes for the experimenters too). This is confirmation bias at its best.

Quote:
Originally Posted by But
Of course you could do it at home, but you are totally clueless about how you do a controlled experiment.

Plenty of scientific studies are retracted due to misconduct, design flaws, or plain old bias.
Retraction of scientific papers for fraud or bias is just the tip of the iceberg

Yesterday, 06:07 PM


liar in wolf's clothing


Join Date: Feb 2005
Location: Frequently about


Re: A revolution in thought
Quote:
Originally Posted by peacegirl
The ones I've seen do not show the kind of recognition that is relevant to this conversation. They may show a dog picking out other four-legged animals but not recognizing familiar human faces.

Chalk up another one for the True Steward of the Authentic Text:
Quote:
Originally Posted by ChuckF
See, the key lay in a basic undeniable mathematical fact of peacegirl's nature: if you give her something to read, she will only understand it to the extent that it confirms whatever she already believes. Any other content or meaning or new information is completely invisible to her, no matter how simply it is expressed, or how plainly evident it is to any other reader.

(also that just reminded me how hilarious it was when you accused me of piracy for linking to your website)
Quote:
Originally Posted by peacegirl
Obviously, you don't think observation counts for anything. Why, in the video I posted, did the dog fail to recognize his master after 2 years of being lost and depressed, but when he smelled him, it was instant recognition? Are you telling me that counts for nothing?

Yep.
Quote:
Originally Posted by peacegirl
Plenty of scientific studies are retracted due misconduct, design flaw, or plain old bias.

Like fraudulent quack Andy Wakefield, you mean?

Yesterday, 06:08 PM


This is the title that appears beneath your name on your posts.


Join Date: Jun 2005
Gender: Male


Re: A revolution in thought
Quote:
Originally Posted by peacegirl
Maybe it isn't closer, but larger. Absolute brightness and apparent brightness are ways to calculate distance, but this does not prove we see in delayed time.

Parallax has nothing to do with brightness. Look it up, don't cut and paste, and explain it in your own words.
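For readers unfamiliar with the method under discussion: parallax converts the apparent shift of a nearby object against background stars into a distance using geometry alone. A minimal sketch of the standard relation (distance in parsecs = 1 / parallax in arcseconds); the numbers are illustrative:

```python
def parallax_distance_pc(parallax_arcsec):
    """Distance in parsecs from the annual parallax angle: d = 1/p."""
    return 1.0 / parallax_arcsec

PC_TO_LY = 3.2616  # light-years per parsec

# Illustrative: a measured parallax of 0.01 arcsec puts the object at 100 pc,
# regardless of how bright or dim it appears.
d = parallax_distance_pc(0.01)
print(d, d * PC_TO_LY)  # 100.0 pc, ~326 light-years
```

An object within roughly a thousand light-years (~300 pc) shows a parallax of a few milliarcseconds, which is measurable; brightness never enters the calculation.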

Yesterday, 06:16 PM


This is the title that appears beneath your name on your posts.


Join Date: Jun 2005
Gender: Male


Re: A revolution in thought
Quote:
Originally Posted by peacegirl
Quote:
Originally Posted by But
No, it isn't, and you would know that if you had any idea how it works. Prove me wrong. Take the article I sent you, look up their p-value and tell me what the probability is that they would get their results by chance when their hypothesis was false.

I'm sorry, but you can't get that kind of accuracy from one experiment. It would have to be replicated many times over to even consider the possibility that a dog could distinguish his master from a stranger. It's interesting to note that what the experimenter expects to see (based on his strong bias) can influence how the results are interpreted.

That's not what I asked. Take one of those experiments; there's a number called the p-value, which tells you what the significance of those results is. Look at that number and tell me what the probability is that you would get the result by chance assuming their hypothesis was false.

Yesterday, 07:36 PM


Member


Join Date: Mar 2011
Location: U.S.A.
Gender: Female


Re: A revolution in thought
Quote:
Originally Posted by But
Quote:
Originally Posted by peacegirl
Quote:
Originally Posted by But
No, it isn't, and you would know that if you had any idea how it works. Prove me wrong. Take the article I sent you, look up their p-value and tell me what the probability is that they would get their results by chance when their hypothesis was false.

I'm sorry, but you can't get that kind of accuracy from one experiment. It would have to be replicated many times over to even consider the possibility that a dog could distinguish his master from a stranger. It's interesting to note that what the experimenter expects to see (based on his strong bias) can influence how the results are interpreted.

That's not what I asked. Take one of those experiments; there's a number called the p-value, which tells you what the significance of those results is. Look at that number and tell me what the probability is that you would get the result by chance assuming their hypothesis was false.

I don't know what the exact p-value would be. Please share. They say that for an experiment to be statistically significant, replication is very important, as is sample size. The design of the experiment is also extremely important.
Experimentation
An experiment deliberately imposes a treatment on a group of objects or subjects in the interest of observing the response. This differs from an observational study, which involves collecting and analyzing data without changing existing conditions. Because the validity of an experiment is directly affected by its construction and execution, attention to experimental design is extremely important.
Treatment
In experiments, a treatment is something that researchers administer to experimental units. For example, a corn field is divided into four, each part is 'treated' with a different fertiliser to see which produces the most corn; a teacher practices different teaching methods on different groups in her class to see which yields the best results; a doctor treats a patient with a skin condition with different creams to see which is most effective. Treatments are administered to experimental units by 'level', where level implies amount or magnitude. For example, if the experimental units were given 5mg, 10mg, 15mg of a medication, those amounts would be three levels of the treatment.
(Definition taken from Valerie J. Easton and John H. McColl's Statistics Glossary v1.1)
Factor
A factor of an experiment is a controlled independent variable; a variable whose levels are set by the experimenter.
A factor is a general type or category of treatments. Different treatments constitute different levels of a factor. For example, three different groups of runners are subjected to different training methods. The runners are the experimental units, the training methods, the treatments, where the three types of training methods constitute three levels of the factor 'type of training'.
(Definition taken from Valerie J. Easton and John H. McColl's Statistics Glossary v1.1)
Experimental Design
So I will ask you again: Why are so many experiments shown to be misleading?
All hypothesis tests ultimately use a p-value to weigh the strength of the evidence (what the data are telling you about the population). The p-value is a number between 0 and 1 and interpreted in the following way:
A small p-value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, so you reject the null hypothesis.
A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.
p-values very close to the cutoff (0.05) are considered to be marginal (could go either way). Always report the p-value so your readers can draw their own conclusions.
What a p-Value Tells You about Statistical Data — dummies
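The definition pasted above can be made concrete with a toy calculation. This is a generic exact binomial test, not the analysis from any study mentioned in the thread; the trial counts are made up for illustration:

```python
from math import comb

def binomial_p_value(successes, trials, p_null=0.5):
    """One-sided exact p-value: the probability of seeing at least `successes`
    hits in `trials` attempts if the null hypothesis (pure chance) is true."""
    return sum(comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
               for k in range(successes, trials + 1))

# Made-up example: a dog picks its owner's photo in 9 of 10 trials.
# Under the null hypothesis of 50/50 guessing:
p = binomial_p_value(9, 10)
print(round(p, 4))  # 0.0107 -> below 0.05, so "statistically significant"
```

So a p-value of 0.0107 means: if the dog were only guessing, a result at least this extreme would occur in about 1% of experiments.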
Statistical Significance Abuse
A lot of research makes scientific evidence seem more “significant” than it is
updated Sep 15, 2016 (first published 2011)
by Paul Ingraham, Vancouver, Canada bio
This article is about two common problems with “statistical significance” in medical research. Both problems are particularly rampant in the study of massage therapy, chiropractic, and alternative medicine in general, and are wonderful examples of why science is hard, “why most published research findings are false” and genuine robust treatment effects are rare:
mixing up statistical and clinical significance and the probability of being “right”
reporting statistical significance of the wrong dang thing
cont. at: Statistical Significance Abuse
Last edited by peacegirl; Yesterday at 07:51 PM.

Yesterday, 07:46 PM


This is the title that appears beneath your name on your posts.


Join Date: Jun 2005
Gender: Male


Re: A revolution in thought
That's not what I asked. Take one of those experiments; there's a number called the p-value, which tells you what the significance of those results is. Look at that number and tell me what the probability is that you would get the result by chance assuming their hypothesis was false.
No cut and paste.

Yesterday, 08:00 PM


Member


Join Date: Mar 2011
Location: U.S.A.
Gender: Female


Re: A revolution in thought
Quote:
Originally Posted by But
That's not what I asked. Take one of those experiments; there's a number called the p-value, which tells you what the significance of those results is. Look at that number and tell me what the probability is that you would get the result by chance assuming their hypothesis was false.
No cut and paste.

Don't tell me how to answer you. I am asking you to find the p-value of those experiments. Even with those values showing beyond mere chance, the statistical significance (the formula they use to arrive at the p-value) does not prove that the results are not mere chance unless the experiment is well designed (the experiments you posted weren't even testing for the right thing), has a large sample size, and is replicated many times with the same results.
Research can be statistically significant, but otherwise unimportant. Statistical significance means that data signifies something… not that it actually matters.
Statistical significance on its own is the sound of one hand clapping. But researchers often focus on the positive: “Hey, we’ve got statistical significance! Maybe!” So they summarize their findings as “significant” without telling us the size of the effect they observed, which is a little devious or sloppy. Almost everyone is fooled by this — except 98% of statisticians — because the word “significant” carries so much weight. It really sounds like a big deal, like good news.
But it’s like bragging about winning a lottery without mentioning that you only won $25.
Statistical significance without other information really doesn’t mean all that much. It is not only possible but common to have clinically trivial results that are nonetheless statistically significant. How much is that statistical significance worth? It depends … on details that are routinely omitted.
Which is convenient if you’re pushing a pet theory, isn’t it?
Statistical Significance Abuse
Last edited by peacegirl; Yesterday at 08:12 PM.

Yesterday, 08:04 PM


This is the title that appears beneath your name on your posts.


Join Date: Jun 2005
Gender: Male


Re: A revolution in thought
Again, no cut and paste.
Take one of those experiments; there's a number called the p-value, which tells you what the significance of those results is. Look at that number and tell me what the probability is that you would get the result by chance assuming their hypothesis was false.
How hard is that?
Last edited by But; Yesterday at 09:08 PM.

Yesterday, 08:16 PM


Member


Join Date: Mar 2011
Location: U.S.A.
Gender: Female


Re: A revolution in thought
Quote:
Originally Posted by But
Again, no cut and paste.
Take one of those experiments; there's a number called the p-value, which tells you what the significance of those results is. Look at that number and tell me what the probability is that you would get the result by chance assuming their hypothesis was false.
How hard is that?

You find the p-value. Then report back.
Last edited by peacegirl; Yesterday at 08:31 PM.

Yesterday, 08:30 PM


Member


Join Date: Mar 2011
Location: U.S.A.
Gender: Female


Re: A revolution in thought
You can hardly find any videos on facial recognition in dogs. This one is funny. They determined a dog recognized his owner by how long his eyes were fixed on the picture. So what if the p-value shows it was statistically significant and therefore not by chance? That's what you are counting on, right? Do you actually think this experiment proves the dog recognized his owner?

Yesterday, 08:47 PM


This is the title that appears beneath your name on your posts.


Join Date: Jun 2005
Gender: Male


Re: A revolution in thought
Quote:
Originally Posted by peacegirl
You find the p-value. Then report back.

Yeah, you can't do even that. No surprise there.

Yesterday, 09:07 PM


This is the title that appears beneath your name on your posts.


Join Date: Jun 2005
Gender: Male


Re: A revolution in thought
Let's say the p-value is 0.02. Tell me what the probability is that you would get the result by chance assuming their hypothesis was false.

Yesterday, 09:10 PM


Mayor of Mayonnaise




Re: A revolution in thought
Quote:
Originally Posted by But
Quote:
Originally Posted by peacegirl
You find the p-value. Then report back.

Yeah, you can't do even that. No surprise there.

Dear, she can't tell you anything about the p-value.
She can tell you the pi value, though. It's four.

Today, 03:35 PM


Member


Join Date: Mar 2011
Location: U.S.A.
Gender: Female


Re: A revolution in thought
Quote:
Originally Posted by But
Quote:
Originally Posted by peacegirl
You find the p-value. Then report back.

Yeah, you can't do even that. No surprise there.

I don't need to be quizzed by you to know that there is something fishy about the Gold Standard of the P-value. All you are doing is trying to justify your faith in the results of a questionable test by making it seem more valid than it actually is. STOP THE BS!
The only way to actually find out if the effect is real or a fluke is to do more experiments. If they all produce results that would be unlikely if there was no real effect, then you can say the results are probably real. The p-value alone can only be a reason to check again — not statistical congratulations on a job well done. And yet that’s exactly how most researchers use it. And most science journalists.
Statistical Significance Abuse
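The replication point in the excerpt can be illustrated with a quick simulation. This sketch (all numbers invented) runs many experiments in a world where the null hypothesis is true, i.e. there is no real effect at all, and counts how often a single experiment still crosses the conventional p < 0.05 bar:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def null_experiment_significant(trials=200):
    """Simulate one experiment where the subject is purely guessing (50/50),
    then test the observed proportion with a normal-approximation z-test."""
    hits = sum(random.random() < 0.5 for _ in range(trials))
    z = (hits - trials * 0.5) / (trials * 0.25) ** 0.5
    return abs(z) > 1.96  # two-sided p < ~0.05

runs = 5000
rate = sum(null_experiment_significant() for _ in range(runs)) / runs
print(rate)  # hovers around 0.05: ~1 in 20 no-effect experiments looks "significant"
```

By the same arithmetic, the chance that k independent replications of a no-effect experiment all come out significant is roughly 0.05**k, which is why consistent replication is strong evidence while a lone significant result is much weaker.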

Today, 03:58 PM


This is the title that appears beneath your name on your posts.


Join Date: Jun 2005
Gender: Male


Re: A revolution in thought
Quote:
Originally Posted by peacegirl
I don't need to be quizzed by you to know that there is something fishy about the Gold Standard of the P-value.

Do you really think you're qualified to make pronouncements like this, given that you can't answer the simplest question on the topic? For a self-described good researcher like yourself, that should be no problem, right? Do you actually understand a single word of that stuff you copy-pasted?
Let's say the p-value is 0.02. Tell me what the probability is that you would get the result by chance assuming the null hypothesis.

Today, 04:20 PM


Member


Join Date: Mar 2011
Location: U.S.A.
Gender: Female


Re: A revolution in thought
Quote:
Originally Posted by But
Quote:
Originally Posted by peacegirl
I don't need to be quizzed by you to know that there is something fishy about the Gold Standard of the P-value.

Do you really think you're qualified to make pronouncements like this, given that you can't answer the simplest question on the topic? For a self-described good researcher like yourself, that should be no problem, right? Do you actually understand a single word of that stuff you copy-pasted?
Let's say the p-value is 0.02. Tell me what the probability is that you would get the result by chance assuming the null hypothesis.

It's misleading, that's why I won't consider it. You can try, as usual, to discredit my abilities, but the case with dogs using the P-value is bullshit, and you're using it to give the results more credence than they deserve.
Significance Problem #1
Two flavours of “significant”: statistical versus clinical
Research can be statistically significant, but otherwise unimportant. Statistical significance means that data signifies something… not that it actually matters.
Statistical significance on its own is the sound of one hand clapping. But researchers often focus on the positive: “Hey, we’ve got statistical significance! Maybe!” So they summarize their findings as “significant” without telling us the size of the effect they observed, which is a little devious or sloppy. Almost everyone is fooled by this — except 98% of statisticians — because the word “significant” carries so much weight. It really sounds like a big deal, like good news.
But it’s like bragging about winning a lottery without mentioning that you only won $25.
Statistical significance without other information really doesn’t mean all that much. It is not only possible but common to have clinically trivial results that are nonetheless statistically significant. How much is that statistical significance worth? It depends … on details that are routinely omitted.
Which is convenient if you’re pushing a pet theory, isn’t it?
Imagine a study of a treatment for pain, which has a statistically significant effect, but it’s a tiny effect: that is, it only reduces pain slightly. You can take that result to the bank (supposedly) — it’s real! It’s statistically significant! But … no more so than a series of coin flips that yields enough heads in a row to raise your eyebrows. And the effect was still tiny. So calling these results “significant” is using math to put lipstick on a pig.
There are a lot of decorated pigs in research: “significant” results that are possibly not even that, and clinically boring in any case.
Just because a published paper presents a statistically significant result does not mean it necessarily has a biologically meaningful effect.
Science Left Behind: Feel-Good Fallacies and the Rise of the Anti-Scientific Left, Alex Berezow & Hank Campbell
If you torture data for long enough, it will confess to anything.
Ronald Harry Coase
P-values, where P stands for “please stop the madness”
Small study proves showers work
Too often people smugly dismiss a study just because of small sample size, ignoring all other considerations, like effect size … a rookie move. For instance, you really do not need to test lots of showers to prove that they are an effective moistening procedure. The power of a study is a product of both sample and effect size (and more).
Statistical significance is boiled down to one convenient number: the infamous, cryptic, bizarro and highly overrated P-value. Cue Darth Vader theme. This number is “diabolically difficult” to understand and explain, and so p-value illiteracy and bloopers are epidemic (Goodman identifies “A dirty dozen: twelve p-value misconceptions”). It seems to be hated by almost everyone who actually understands it, because almost no one else does. Many believe it to be a blight on modern science. Including the American Statistical Association — and if they don’t like it, should you?
The mathematical soul of the p-value is, frankly, not really worth knowing. It’s just not that fantastic an idea. The importance of scientific research results cannot be jammed into a single number (and nor was that ever the intent). And so really wrapping your head around it is no more important than learning the gritty details of the Rotten Tomatoes algorithm when you’re trying to decide whether to see that new Godzilla (2014) movie.
cont. at: Statistical Significance Abuse
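The statistical-vs-clinical distinction in the excerpt above can be demonstrated numerically. A minimal sketch (invented numbers, normal-approximation two-proportion z-test) showing that a clinically trivial effect becomes "statistically significant" once the sample is large enough:

```python
from math import erf, sqrt

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two observed proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def two_sided_p(z):
    """Two-sided p-value from a z statistic (normal approximation)."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Invented trial: treatment helps 51% of patients vs 50% for placebo --
# a 1-point effect nobody would care about clinically.
z = two_proportion_z(0.51, 100_000, 0.50, 100_000)
print(two_sided_p(z) < 0.05)  # True: "significant", yet the effect is tiny
```

With only 100 patients per arm the same 1-point difference would not be significant at all; the p-value tracks sample size as much as it tracks the effect.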

Today, 04:34 PM


This is the title that appears beneath your name on your posts.


Join Date: Jun 2005
Gender: Male


Re: A revolution in thought
Quote:
Originally Posted by peacegirl
Quote:
Originally Posted by But
Quote:
Originally Posted by peacegirl
I don't need to be quizzed by you to know that there is something fishy about the Gold Standard of the P-value.

Do you really think you're qualified to make pronouncements like this, given that you can't answer the simplest question on the topic? For a self-described good researcher like yourself, that should be no problem, right? Do you actually understand a single word of that stuff you copy-pasted?
Let's say the p-value is 0.02. Tell me what the probability is that you would get the result by chance assuming the null hypothesis.

It's misleading, that's why I won't consider it.

You're not fooling anyone. It's obvious to everyone reading this that not only do you have no idea what you're talking about, you are incapable of finding out the answer, even with the whole internet at your disposal.
Smart people can play dumb, but it doesn't work the other way around, unfortunately.
Prove me wrong. Let's say the p-value is 0.02. Tell me what the probability is that you would get the result by chance assuming the null hypothesis.
