
An algorithm for scientific publishing?

In a commentary, the editor of the journal 'Memórias do Instituto Oswaldo Cruz' calls into question what is legitimate and acceptable in the self-regulation of science
By IOC Journalism | 12/06/2017 - Updated on 18/06/2021

Just as it has been a subject of debate in the field of advertising, self-regulation is a central theme of contemporary discussions on scientific publishing. Can a scientist's diligence in writing up his or her data be as precise as an algorithm? Researcher Adeilton Brandão, editor of the journal 'Memórias do Instituto Oswaldo Cruz', discusses the topic in a commentary published on the journal's website. Published by the Oswaldo Cruz Institute (IOC/Fiocruz) since 1909, the journal occupies a prominent position in Latin America and has as a fundamental commitment to offer researchers and readers, free of charge, an efficient editorial process in line with the best practices for publishing the results of scientific research. The text follows below:

The self-regulation of science: what is legitimate and acceptable
Adeilton Brandão, editor of the scientific journal 'Memórias do Instituto Oswaldo Cruz'

In an ideal world, publishing the results of scientific research can be compared to an algorithm with the following steps (a code sketch of the loop follows the list):

A) obtain all the resources necessary to complete, in a reasonable period of time, a project designed to test a scientific hypothesis;
B) write a concise and objective text describing the hypothesis tested, the methods, the data collection, the analysis adopted to reject or support the hypothesis, and the conclusions;
C) be aware of your responsibility as a scientist and comply strictly with ethical recommendations and good practices for scientific publication;
D) decide whether to communicate the results of the research through an event bringing together researchers from your area of expertise, through books, or through specialized journals;
E) choose an appropriate scientific meeting (congress, symposium, conference), a book publisher, or a specialized journal (in the case of articles);
F) regardless of the medium chosen, carefully delimit the community of researchers for whom you believe the results may generate interest and impact (your results will allow other researchers to advance their work);
G) wait for comments, critical analyses, and reports from these researchers about their attempts, successful or not, to reproduce or expand the scientific results you have published;
H) if necessary, respond rigorously and kindly to your colleagues' comments, criticisms, and reports of unsuccessful reproducibility attempts [do not get angry or despair at the actions and demands involved in step 'H'];
J) continue to reflect, identify new scientific problems, and try to solve them through new hypothesis-testing projects;
K) return to step 'A'.
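
Taken literally, the cycle above is simply an endless loop. As a purely illustrative aside, it could be sketched in a few lines of Python; every function name below is a hypothetical stand-in for a human activity and appears nowhere in the commentary itself:

# A playful, minimal sketch of the idealized publication cycle.
# All names here are hypothetical stubs, not part of the commentary.

def test_hypothesis(hypothesis: str) -> dict:
    """Steps A-B: secure resources, run the project, write it up."""
    return {"hypothesis": hypothesis, "manuscript": f"Report on {hypothesis}"}

def publish(result: dict) -> list:
    """Steps C-F: check ethics, choose a venue, delimit the audience."""
    print(f"Publishing: {result['manuscript']}")
    return ["reviewer comment", "failed replication report"]  # step G

def respond(feedback: list) -> None:
    """Step H: answer rigorously and kindly; do not get angry or despair."""
    for item in feedback:
        print(f"Responding politely to: {item}")

def next_problem(result: dict) -> str:
    """Step J: reflect and identify a new scientific problem."""
    return f"a follow-up to '{result['hypothesis']}'"

hypothesis = "an initial hypothesis"
for _ in range(3):  # step K: return to step A (bounded here, endless in reality)
    result = test_hypothesis(hypothesis)
    respond(publish(result))
    hypothesis = next_problem(result)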

The above algorithm is expected to be rigorously executed by a diligent scientist. However, the real world is imperfect, and in it one factor in particular can sometimes cause a deviation from these steps: the intense competition among scientists for resources and recognition. In itself, competition is not a problem: it can help resolve the conflict between scarce resources and growing demand. The economic thought initiated by Adam Smith indicates that competition leads to the efficient use of limited resources and, if properly regulated, brings benefits to society (on the subject, see Stiglitz's text).

But what does this have to do with scientific publishing in the contemporary research environment? After all, there is self-regulation in science and its associated processes – editorial activity, research communication and associations among scholars, for example. Apparently, there would be no need for an external agent to oversee the process. Scientists generate new facts and themselves decide which of these facts are worth publishing – most editors of scientific journals, like the reviewers of scientific articles (the peer reviewers), are working scientists. What is wrong with that? Well, from the point of view of formal (or official) regulation of competition between peers in science, no external agent, free of any conflict of interest, has yet been established to take care of what we could call "the virtuous code of scientific practice". The emergence of research integrity committees in some countries (for example, the Office of Research Integrity in the United States) can be considered the beginning of such regulation. However, science is a borderless activity, which means that global regulation is not easy to implement.

The essential question we can now ask is: is there a need for such a regulatory agent? Some behind-the-scenes facts of the scientific world suggest that the answer is yes. Indeed, it was the spread of the following practices in the research environment that led to the creation of research integrity committees:

1) plagiarism and lack of credit for the work of other scientists;
2) fraudulent articles presented as original research work;
3) theft of data, projects or ideas from a lab colleague or collaborators;
4) use of the anonymous peer review mechanism in an attempt to prevent competitors from disclosing their research work;
5) lack of recognition for those who supported the research;
6) expansion of the scope of the research work, for example, claiming to have solved a problem that is beyond the capacity of the methodology used or not contemplated by the experimental project;
7) in the case of publishers, important agents in the scientific world: launching scientific journals whose sole purpose is to generate easy money, with no commitment to editorial ethics, scientific rigor or professionalism;
8) submission of manuscripts to journals recognized as 'slot machines', 'predatory journals' or pseudo-journals;
9) overestimating the 'publication scoring equation': successful researcher equals X articles published in T years with Y impact factor and Z citations;
10) fractionation of research results into many 'new scientific articles'.

As some of these practices are accepted and even encouraged in certain countries – for example, for a scientist to be successful in Brazilian universities and research centers, the publication of many articles in short periods of time is an indispensable condition – any committee or body relating to research integrity can only partially deal with such issues. This puts the problem back on the scientists themselves: they must come to a consensus about what is legitimate and acceptable within the framework of scientific practice.

Most of the practices listed above are clearly unacceptable – items 1 to 8, for example. For the last two, however, it is not uncommon to find someone who defends them. Without undermining the power of competition to promote the efficient use of scarce and limited resources, scientists and their organizations need to convey a very clear message to the world: even if some practices seem legitimate to some members of the scientific community, if those same practices reduce the 'common good' – that is, actions that will benefit society – they cannot be accepted as good practices.

The original English version of the commentary is available on the journal's website.

Non-profit reproduction of this text is permitted provided the source is cited (Comunicação / Instituto Oswaldo Cruz).