Nearly a decade later, it turns out that a scientific study designed to help reduce fraud and extortion was itself fabricated. Dan Ariely, the world-famous behavioral economist and author of The (Honest) Truth About Dishonesty, stands accused of fraud.

Dan Ariely, a professor at Duke University, is a famous behavioral economist whose work deals with the psychological mechanisms behind economic decisions. His innovative research earned him a professorship, and his popular science books brought him worldwide recognition. He has over 470 scientific papers to his credit, and in 2018 he was named one of the 50 most influential living psychologists.

In 2012, he and Max Bazerman, a professor of management at Harvard Business School, published the results of experiments purporting to prove that a simple trick can reduce the number of scams and frauds. The researchers showed that placing the declaration "I confirm that the information provided is true" at the beginning of a form, rather than at the end, reduces how often people provide inaccurate information.

It is customary to place such declarations at the end of documents. Influenced by Ariely's work, insurance companies and other institutions began putting them at the beginning instead. US tax offices have been doing so since 2016. The researchers' groundbreaking paper has been cited more than 400 times in other scientific publications.

Research on research, i.e. quality control in science

Years later, the same scientists found that subsequent experiments did not confirm the effect they had previously observed. In itself, there is nothing wrong with that. Sometimes a result observed in research is a coincidence, sometimes the fault of a flawed method, and less often the product of calculation errors. For this reason, scientific papers (at least reliable ones) describe precisely the experiments carried out and the statistical methods used, so that other groups of researchers can replicate them and confirm the results.

However, this piqued the interest of Leif Nelson and Joseph Simmons, who are also behavioral psychologists. They subjected the original 2012 paper to detailed scrutiny. "Without a doubt," one of the experiments described in it was fabricated, they write on the Data Colada blog.


The data in Ariely's paper came from an insurance company and concerned car mileage. Common sense suggests that this variable should follow a roughly normal distribution: outliers should be rare and average values frequent. However, the figures cited in the study indicate that the insurer's clients covered all distances about equally often; the curve is nearly flat. Someone made up the data.
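This kind of check can be sketched in a few lines: compare how strongly values cluster around the mean in plausible, roughly normal mileage data versus uniformly generated numbers. The distribution parameters below are illustrative assumptions, not the figures from the actual paper.

```python
import random
import statistics

random.seed(42)
N = 10_000

# Plausible annual mileage: clustered around an average value
# (illustrative parameters, not the insurer's real figures).
plausible = [max(0.0, random.gauss(12_000, 4_000)) for _ in range(N)]

# Fabricated-looking mileage: every distance equally likely,
# so the histogram comes out nearly flat.
fabricated = [random.uniform(0, 50_000) for _ in range(N)]

def share_within_one_sd(data):
    """Fraction of values lying within one standard deviation of the mean."""
    m = statistics.mean(data)
    sd = statistics.stdev(data)
    return sum(1 for x in data if abs(x - m) <= sd) / len(data)

# A normal-like variable keeps about 68% of values within one standard
# deviation of the mean; a uniform variable keeps only about 58%.
print(f"plausible:  {share_within_one_sd(plausible):.2f}")
print(f"fabricated: {share_within_one_sd(fabricated):.2f}")
```

A flat distribution where a bell curve is expected is exactly the kind of anomaly that jumps out of a simple summary statistic like this, before any formal test is run.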

The authors of the original publication agreed with the "research researchers" Nelson and Simmons. They asked the journal in which it appeared, the Proceedings of the National Academy of Sciences (PNAS), to retract it. A retraction is a kind of official declaration that a study was flawed, one of the most serious sanctions in science, though rarely used.

It is not known how the fraudulent data ended up in the paper

However, it is not clear how the forged data got into the paper, i.e. who put it there. There were five authors, and four said it was not them. Ariely himself explains that the study was based on information provided by the insurance company, which had used the fraud-reducing method described by the scientists in its own documents. Ariely stated that he had no idea the data was fabricated, and that if he had known, he would never have used it in research.

Which company did he work with at the time? The scientist hides behind confidentiality agreements and does not reveal its name. He claims that all the people he dealt with at that company no longer work there, and that none of them remembers what happened nearly ten years ago.

Ariely faces allegations of misconduct again

This isn't the first time Ariely has faced allegations of misconduct. In 2008, he published the results of an experiment that an independent team was unable to reproduce. Last month, editors had to add a note to another of his old papers, this time from 2004, because statistical irregularities were found in the study. Ariely stated that the original data no longer existed for him to provide.


The irony is that the economist is the author of a famous popular science book, "The (Honest) Truth About Dishonesty: How We Lie to Everyone, Especially Ourselves" (published in Poland by Smak Tylko Publishing in 2017), which was on the New York Times bestseller list.

Dan Ariely plays down the allegations, claiming that science is a process of self-correction. Meanwhile, the affair keeps growing. It casts a shadow not only on Ariely's reputation but on the entire field of behavioral economics, and, in a broader context, even on the public perception of scientists and science in general.

The results of most studies cannot be replicated

In 2015, the weekly journal Science published a paper showing that most scientific experiments in the field of psychology cannot be replicated. University of Virginia researchers led by Brian Nosek repeated 100 studies whose results had been published in various psychology journals, trying to keep their experiments as close as possible to the originals. In 60 cases, they did not get the same result.

John Ioannidis of Stanford University, a researcher known for studying the shortcomings of research, estimates that the proportion of poor research in medicine is comparable: about half of all published papers contain exaggerated or outright wrong conclusions. He also argues that the share of worthless scientific publications may be even greater in fields other than psychology and medicine.

Ioannidis drew attention to this as early as 2005, in a paper published in PLOS Medicine. In it, he argued that, statistically speaking, a published research finding is more likely to be false than true.

Do most researchers lie? Of course not

Does this mean that most scientists lie? No. The problem lies in statistics.

It is accepted in science that a test result is "statistically significant" if the probability of obtaining such a result by pure chance is at most 5 percent. This probability is called the p-value and is expressed as a decimal number. Research that does not meet this requirement (i.e. whose p-value is greater than 0.05) is often not published at all, while research that satisfies it is often considered sound and published.


However, even a p-value of 0.05 means there is still a 5 percent probability, i.e. a 1 in 20 chance, that the test result is pure coincidence. Put the other way around: if 20 false hypotheses are tested at this threshold, on average one of them will appear statistically significant.

The p-value only quantifies the probability that the observed result arose by chance. It says nothing about whether the conclusion drawn from the test is actually correct.
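The 1-in-20 arithmetic can be demonstrated directly: test many hypotheses that are all false, with both groups drawn from the same distribution, and count how often the p-value still drops below 0.05. A minimal sketch, using a two-sample z-test as a stand-in for whatever test a real study would use:

```python
import math
import random
import statistics

random.seed(7)

def p_value_two_sample(a, b):
    """Two-sided p-value for 'the means differ' (normal approximation)."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))); two-sided tail probability.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Test 2000 hypotheses that are ALL false: both groups come from the
# same distribution, so every "significant" result is a coincidence.
trials = 2000
significant = 0
for _ in range(trials):
    group_a = [random.gauss(0, 1) for _ in range(50)]
    group_b = [random.gauss(0, 1) for _ in range(50)]
    if p_value_two_sample(group_a, group_b) < 0.05:
        significant += 1

# Roughly 5% of these pure-noise comparisons clear the p < 0.05 bar,
# i.e. about 1 in 20 "discoveries" here is guaranteed to be spurious.
print(round(significant / trials, 2))
```

Run enough tests on pure noise and a steady trickle of them will look significant; that is exactly why a single unreplicated result proves so little.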

Repeat and repeat again

The answer to that question, whether research results are correct or merely a coincidence, can be obtained only after many of the same tests have been performed under identical or similar conditions. That is why it is so important in science that an experiment be reproducible, and that scientific publications provide all the data needed to replicate the research described.

A notable example of the fact that a single result does not validate a hypothesis comes from two famous and controversial studies: the first on the supposed harm of thiomersal in vaccines (which was claimed to cause autism), the second on the supposed harm of genetically modified corn (which was claimed to cause tumors in mice). No other comparable studies have produced the same, or even similar, results.

In that sense, Dan Ariely is right: science corrects itself. But an honest error is one thing, and fitting the data to the hypothesis presented in a paper is quite another. "Finding evidence of fraud in the work of such an influential scientist would be stark, especially for young researchers who want to explore this field," says Eugene Deman of the University of Pennsylvania.

Source: Science