An Objective Quantitative “Quality Factor” for Scientific Meetings, Is It Possible? A New Formula
Original Research

1. Hacettepe University Faculty of Medicine, Department of Urology, Ankara, Turkiye
2. Hacettepe University Faculty of Medicine, Department of Urology, Division of Pediatric Urology, Ankara, Turkiye
Received Date: 14.12.2024
Accepted Date: 28.02.2025
Online Date: 18.04.2025

Abstract

Objective

Numerous local and international meetings are held in the field of medicine. Up-to-date information and experience are shared at these meetings, which also provide opportunities that pave the way for collaboration. There is a need for an objective and reliable tool to evaluate conference quality. In this study, we aimed to develop an objective and understandable quality factor (QF) for evaluating scientific congresses.

Materials and Methods

Abstract books of four national meetings of the Society of Urological Surgery in Turkey (MSUST), held in 2012 (MSUST1), 2014 (MSUST2), 2016 (MSUST3), and 2018 (MSUST4), were reviewed between 2021 and 2022. A total of 1,436 abstracts were evaluated. The publication status of the presented abstracts in a scientific journal within the first two years after each meeting was investigated using the Web of Science, PubMed, and Google Scholar databases. The impact factors of the journals in which the abstracts were published and the H-indices of the scientists invited as speakers were obtained from the Web of Science database. The H-index values of the speakers at the time of their participation in the meeting were considered. Based on these three parameters, we created a QF for scientific congresses: QF = [(two-year abstract publication rate × average journal impact factor) + average H-index of speakers]/10.

Results

MSUST1, MSUST2, MSUST3, and MSUST4 had follow-up periods of 96, 72, 48, and 24 months, respectively. With no time limit applied, the overall publication rates of abstracts from MSUST1, MSUST2, MSUST3, and MSUST4 were 31.6%, 19.9%, 13.8%, and 14.1%, respectively. The median publication times of abstracts in MSUST1, MSUST2, MSUST3, and MSUST4 were 23 (-2 to 88), 11 (-2 to 60), 10.5 (-2 to 39), and 7 (-2 to 24) months, respectively. At the MSUST4 meeting, the average H-index of the speakers was 13.6±11.5, the average impact factor of the journals in which the abstracts were published was 2.029±0.840, and the rate of publication of abstracts within the 24-month period was 14.1%. With the formula we propose, the QF of the MSUST4 meeting was calculated as [(14.1 × 2.029) + 13.6]/10 = 4.22.

Conclusion

The QF we recommend is easy to calculate and can be used to evaluate the quality of scientific meetings objectively. However, our primary goal is to draw attention to this need rather than to promote this particular formula. We believe such a tool will help physicians manage their time, energy, and financial resources.

Keywords:
Scientific congresses, formula, quality, h-index, impact factor

What’s known on the subject? and What does the study add?

Many scientific conferences are organized in the medical field, but how can the quality of a conference be measured? In the literature, there is no comprehensive method for evaluating the quality of scientific conferences. We offer a comprehensive approach to this problem: with the formula we have developed, we aim to measure the quality of conferences.

Introduction

An immense number of meetings are held annually in the medical field on local and international scales. These meetings provide an environment for sharing the most up-to-date information and valuable experience, as well as opportunities for networking that pave the way for future collaboration. From this point of view, international congresses play an important role in the education and development of young professionals; therefore, they are promoted by the relevant medical societies. However, their value remains a controversial topic.

Some colleagues argue against certain aspects of international congresses, citing their negative impact on carbon emissions, while others defend them (1). Ioannidis argues that conferences have little to do with the dissemination of scientific knowledge and suggests reevaluating our standpoint on international gatherings (2).

With advances in communication technology and the growing number of alternatives to conventional meetings, the rationale for large in-person gatherings is becoming less clear. Considering the recent restrictions imposed on both national and international meetings by local authorities to prevent the spread of coronavirus disease 2019, attending an international congress is a decision of crucial importance, and this development forces us to reconsider our perspective on participation. In this regard, we believe that there is a vital need for an objective and reliable tool to assess congress quality.

The aim of the present paper is to propose an objective criterion to determine the scientific value of any given congress, thus facilitating the decision-making process. We believe that this tool would be of great assistance to physicians in managing their time, energy and financial resources.

Materials and Methods

Abstract books of four national meetings of the Society of Urological Surgery in Turkey (MSUST) were reviewed [2012 (MSUST1), 2014 (MSUST2), 2016 (MSUST3), and 2018 (MSUST4)]. A total of 1,485 abstracts from these four meetings were reviewed between 2021 and 2022. Poster (visual or oral) presentations were not included in the study. A total of 1,436 abstracts from the four meetings were included in the study.

The publication status of these abstracts in a scientific journal within the first two years after the meeting was investigated using the Web of Science, PubMed, and Google Scholar databases. The databases were searched using the name of the first author of each abstract; when this search was unsuccessful, the search was conducted with subsequent authors. Published papers identical to the abstracts in hypothesis, study design, and conclusion were considered a match and included in the study. Abstracts published more than three months before the congress date were excluded from the study. The publication time was calculated as the interval between the meeting and the online availability of the matching article. Publication rates in the first two years were calculated, and the median publication time was determined for each meeting.
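
As a rough illustration of how these quantities could be derived (a sketch under our own assumptions, not the authors' workflow; all names, dates, and counts below are hypothetical), the publication interval in months and the two-year publication rate could be computed as follows:

```python
# Illustrative sketch only: publication interval in months and two-year
# publication rate for one meeting. Dates and counts are hypothetical.
from datetime import date

def months_between(start, end):
    """Whole-month interval; negative if the article appeared before the meeting."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def two_year_rate(meeting_date, publication_dates, n_abstracts):
    """Share of all accepted abstracts published within 24 months of the meeting."""
    published = sum(
        1 for d in publication_dates
        # abstracts published more than 3 months before the meeting were excluded
        if -3 <= months_between(meeting_date, d) <= 24
    )
    return 100 * published / n_abstracts

# Hypothetical example: a meeting with 400 abstracts, 56 matched to articles.
meeting = date(2018, 10, 1)
pubs = [date(2019, 5, 15)] * 40 + [date(2020, 8, 1)] * 16
print(round(two_year_rate(meeting, pubs, 400), 1))  # -> 14.0
```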

For the fourth MSUST, abstracts that were subsequently published in a journal were identified, and the impact factors of the journals in which these studies were published were obtained.

The H-indices of the lecturers who attended the fourth MSUST as speakers were taken from Web of Science. The H-index values of the speakers at the time of their participation in the meeting were considered.

Based on these three parameters, we created a quality factor for scientific congresses: quality factor (QF) = [(two-year abstract publication rate × average journal impact factor) + average H-index of speakers]/10.
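
As a minimal sketch (not the authors' software; the function and parameter names are ours), the proposed formula can be computed as follows, with the publication rate expressed as a percentage, as in the worked example in the Results:

```python
# Minimal sketch of the proposed quality factor (QF); variable names are ours.
# publication_rate_pct: percentage of abstracts published within two years (e.g. 14.1)
# mean_impact_factor:   average impact factor of the journals that published them
# mean_speaker_h_index: average H-index of the invited speakers

def quality_factor(publication_rate_pct, mean_impact_factor, mean_speaker_h_index):
    """QF = [(two-year publication rate x mean journal IF) + mean speaker H-index] / 10"""
    return ((publication_rate_pct * mean_impact_factor) + mean_speaker_h_index) / 10
```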

Statistical Analysis

Data were analyzed using SPSS 23.0 software. Non-parametric data are presented as median (minimum-maximum), and parametric data as mean ± standard deviation. Kaplan-Meier survival analysis was used to assess the distribution of publication times.
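
The following is a minimal, dependency-free sketch of the Kaplan-Meier product-limit estimate applied to publication times, assuming each abstract contributes either a publication time (event) or a censoring time at the end of follow-up; it is illustrative only and not the SPSS procedure used in the study:

```python
# Illustrative Kaplan-Meier product-limit estimate of the cumulative
# publication probability for a set of abstracts (not the authors' code).

def kaplan_meier_publication(times, published):
    """Return (time, cumulative publication probability) pairs."""
    data = sorted(zip(times, published))
    n_at_risk = len(data)
    survival = 1.0          # probability of remaining unpublished
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        events = sum(1 for tt, e in data[i:] if tt == t and e)
        total = sum(1 for tt, _ in data[i:] if tt == t)
        if events:
            survival *= 1 - events / n_at_risk
            curve.append((t, 1 - survival))   # publication probability by time t
        n_at_risk -= total
        i += total
    return curve

# Toy example: five abstracts, three published at 6, 12 and 20 months,
# two still unpublished at the 24-month cut-off (censored).
print(kaplan_meier_publication([6, 12, 20, 24, 24], [1, 1, 1, 0, 0]))
```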

Results

Forty-nine of 1,485 abstracts were excluded because they were published more than three months before the meetings. A total of 1,436 abstracts were investigated. The 1st MSUST, 2nd MSUST, 3rd MSUST, and 4th MSUST had a follow-up time of 96, 72, 48, and 24 months, respectively.

The overall publication rates of the 1st MSUST, 2nd MSUST, 3rd MSUST, and 4th MSUST were 31.6%, 19.9%, 13.8%, and 14.1%, respectively (Figure 1). The median publication times of the 1st MSUST, 2nd MSUST, 3rd MSUST, and 4th MSUST were 23 months (-2 to 88), 11 months (-2 to 60), 10.5 months (-2 to 39), and 7 months (-2 to 24), respectively. In the survival analysis of published abstracts, the 24-month publication rates of the 1st MSUST, 2nd MSUST, 3rd MSUST, and 4th MSUST were 59.3%, 81%, 86%, and 100%, respectively (Figure 2). The publication curves for all follow-up times are provided in Figure 3. Accordingly, we used a publication time interval of two years. The publication rates for the first two years of the 1st MSUST, 2nd MSUST, 3rd MSUST, and 4th MSUST were 18.8%, 16.3%, 12%, and 14.1%, respectively (Figure 4).

At the fourth meeting of MSUST, the average H-index of the lecturers participating as speakers was 13.6±11.5, and the average impact factor of the journals in which the abstracts were published was 2.029±0.840.

In summary, at MSUST’s fourth meeting, the rate of publication in a journal in the first two years, the mean impact factor of journals, and the average H-index of speakers were 14.1%, 2.029, and 13.6, respectively.

With the formula we propose, the QF of the MSUST4 meeting was calculated as [(14.1 × 2.029) + 13.6]/10 = 4.22. To keep the result simple, we took 10% of the obtained value, i.e., we divided the output by 10.
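
Plugging the MSUST4 figures into the hypothetical quality_factor() sketch given in the Materials and Methods section reproduces the reported value:

```python
# Two-year publication rate 14.1%, mean journal IF 2.029, mean speaker H-index 13.6.
print(round(quality_factor(14.1, 2.029, 13.6), 2))  # -> 4.22
```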

Discussion

Every year, international and national scientific meetings are held in many scientific fields around the world. Scientific meetings are especially important for young scientists, who use them to follow and discuss developments in their fields, present their own research, and generate new ideas. However, there is no objective, widely used assessment system that compares congresses or measures the quality of a congress. An issue that has attracted particular attention in the last two decades is the publication rate of congress abstracts; these rates have been evaluated in almost all fields (3). Such papers explore the factors associated with the publication of abstracts accepted to a congress and report the publication rates of scientific congresses. In fact, the publication rate is an important matter of prestige for congresses. In urology in particular, prestigious associations such as the American Urological Association, the European Association of Urology, the Société Internationale d'Urologie, and the European Society for Pediatric Urology have reported the publication rates of their own congresses (4-7). However, examination of these publications shows differences in the databases used to identify the publications, the methods of matching publications and abstracts, and, more importantly, the follow-up times of the studies. Therefore, although publication rates provide information about congresses, it is very difficult to compare them for the aforementioned reasons, and these parameters need to be standardized. Moreover, we believe that the publication rate alone is insufficient to compare congresses in terms of scientific value.

An important factor to consider when examining publication rates is the length of follow-up. There are studies with follow-up periods of up to 120 months (8-10). In their systematic review of 181 studies, Scherer et al. (3) analyzed follow-up time using survival analysis and showed an increase in the publication rate over time. The authors also reported publication rates of 68.7% for randomized controlled studies and 44.9% for other study types at a 10-year follow-up. Examination of their survival curves shows that, for all study designs, more than half of the studies were published within two years. In our study, we likewise observed that more than half of all publications, including those of the 1st MSUST, which had the longest follow-up (96 months), appeared within the first two years of follow-up. Although the publication rate in the first two years does not capture the overall publication rate of a congress, we believe that it can be used as a better indicator.

Today, bibliometric indicators are used by researchers and journals. Although some aspects remain controversial, these bibliometric measurements are accepted by the scientific community. The H-index was proposed to assess the scientific output of an individual researcher (11). Although Hirsch first defined the H-index for the field of physics, it was later applied to almost all fields. Hirsch defined the H-index as the number of papers h with a citation count of ≥ h. For example, if a researcher's H-index is 5, the researcher has 5 publications that have each been cited at least 5 times. However, the H-index also has limitations. The most important is that the researcher must have been engaged in scientific research for a certain length of time; its correlation with age is therefore not surprising. Another limitation is self-citation and friendly cross-citation, through which a researcher's H-index may increase quickly (12, 13). A further issue is the contribution of the authors to a study: since the citations received by a study affect all authors equally, the first and last authors are evaluated in the same way as the other authors (13). The H-index should also be interpreted within a given field, because it would not be fair to compare researchers working in two fields in which the total numbers of citations and articles over a given period differ greatly. Despite these obvious limitations, the H-index remains a currently applicable bibliometric indicator.
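
For illustration (not taken from the paper), Hirsch's definition can be computed directly from a list of per-paper citation counts:

```python
# Illustrative sketch: H-index from per-paper citation counts (Hirsch's definition).

def h_index(citations):
    """Largest h such that the researcher has h papers each cited at least h times."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Matches the example in the text: five papers each cited at least five times -> h = 5.
print(h_index([12, 9, 7, 6, 5, 2, 1]))  # -> 5
```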

The bibliometric indicator commonly used for journals is the impact factor. The impact factor is calculated by dividing the number of citations a journal's articles from the preceding two years receive in a given year by the number of citable items the journal published in those two years (14). However, a high journal impact factor does not indicate high quality for every article in it. Since the impact factor is calculated from the total number of citations, it may not accurately represent the citation impact of every article published in the journal. Furthermore, the impact factor is affected by factors such as the journal's subject category, specialty, type of publication, and number of publications (15).
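
A minimal sketch of this calculation, with hypothetical numbers, is:

```python
# Illustrative sketch of the standard two-year journal impact factor described above.

def impact_factor(citations_to_prev_two_years, items_published_prev_two_years):
    """Citations received this year to items from the previous two years,
    divided by the number of citable items published in those two years."""
    return citations_to_prev_two_years / items_published_prev_two_years

# Hypothetical example: 406 citations this year to articles from the previous
# two years, during which 200 citable items were published -> IF = 2.03.
print(round(impact_factor(406, 200), 3))
```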

What is known on the subject is that the process that began with examining the publication rates of meetings has not yet developed to the point of comparing meetings with one another. In a study published in 2018, De Simone et al. (16) argued that a congress impact factor (IFc) should be assigned to congresses. The authors derive the IFc by dividing the mean H-index of the lecturers by the number of lectures on the topic at the congress, with normalization for lecture topic, and consider it an important indicator for congresses. We believe, however, that congresses are not only about the presentations of invited lecturers, and the calculation is difficult for large and heterogeneous congresses. Although it is a method that can be used for standardization, we believe that this parameter alone is insufficient for calculating a quantitative index of a congress, because in our opinion a congress is not just a meeting at which lecturers make presentations; presenting and discussing abstracts of new studies is also an important component of congresses.

What this study adds is a more inclusive formula. In this formula, we included the H-index of the lecturers, the publication rate of the accepted abstracts, and the impact factor of the journals publishing those abstracts. The index can be calculated easily by multiplying the publication rate by the mean impact factor of the journals and then adding the mean H-index of the lecturers; we suggest taking ten percent of the output value for simplification. It will not be easy to find every parameter of this assessment system, but meeting organizers can request the H-index of the invited lecturers when collecting such data. Publication rates and journal impact factors may seem more difficult to obtain than this parameter. We believe that a single database should be used for this purpose, and that only publications indexed in that database should be included in the calculation, because the use of multiple databases would change the publication rates and complicate the calculation. Introducing a common questionnaire about the fate of the authors' previous submissions as part of the abstract submission page might solve this issue, and it could perhaps be followed by a fixed serial number assigned to all abstracts across congresses.

Study Limitations

The most important limitation of this study is the lack of validation of the formula. For the result of the calculation to be judged good or bad, it should be compared with other scientific methodologies; similarly, evaluating meetings through participant feedback or survey forms could help confirm the accuracy of the results. The effectiveness of the QF formula could also be enhanced by including variables such as the citations received by the published papers. However, our primary goal is to introduce the QF and highlight this approach. Another important limitation of the present study is the lack of information on non-published abstracts. We examined meetings organized by a single society in order to calculate H-indices accurately and to access information on past congresses easily. Basing the QF on the H-index and the impact factor also introduces the shortcomings of these indicators, as discussed above. Therefore, we suggest that the QF be used within each discipline; for example, comparing the QF of an oncology meeting with that of a urology congress may yield inaccurate results depending on the parameters, whereas meetings within the same scientific discipline can be compared using the proposed formula. A final point about the proposed QF is that it requires monitoring for at least two years after the meeting.

Conclusion

In a world where science is universal, many disciplines organize congresses periodically. We believe that a scientific quality index for these congresses would serve as a guide both for the prestige of the congress and for participants. The QF we recommend is easy to calculate and can be used to evaluate the quality of scientific meetings objectively. However, our primary goal is to draw attention to this need rather than to promote this particular formula. There is a need for a standard calculation tool that reflects the quality of congresses. We believe that such a tool will help physicians manage their time, energy, and financial resources. The formula and its components can be improved.

Ethics

Ethics Committee Approval: Not necessary.
Informed Consent: Not necessary.

Authorship Contributions

Surgical and Medical Practices: M.Al., Ö.F.B., A.K., P.S., A.C.B., S.T., Concept: M.Al., M.A., K.E.B., P.S., H.S.D., Design: M.Al., M.A., Ö.F.B., A.K., K.E.B., P.S., A.C.B., S.T., Data Collection or Processing: M.Al., M.A., A.K., A.C.B., H.S.D., Analysis or Interpretation: M.Al., M.A., K.E.B., H.S.D., S.T., Literature Search: M.Al., M.A., Ö.F.B., A.K., P.S., A.C.B., H.S.D., S.T., Writing: M.Al., M.A., Ö.F.B., A.K., K.E.B., P.S., A.C.B., H.S.D., S.T.
Conflict of Interest: No conflict of interest was declared by the authors.
Financial Disclosure: The authors declared that this study received no financial support.

References

1. Green M. Are international medical conferences an outdated luxury the planet can't afford? Yes. BMJ. 2008;336:1466.
2. Ioannidis JP. Are medical conferences useful? And for whom? JAMA. 2012;307:1257-1258.
3. Scherer RW, Meerpohl JJ, Pfeifer N, Schmucker C, Schwarzer G, von Elm E. Full publication of results initially presented in abstracts. Cochrane Database Syst Rev. 2018;11:MR000005.
4. Autorino R, Quarto G, Di Lorenzo G, De Sio M, Damiano R. Are abstracts presented at the EAU meeting followed by publication in peer-reviewed journals? A critical analysis. Eur Urol. 2007;51:833-840.
5. Autorino R, Quarto G, Di Lorenzo G, Giugliano F, Quattrone C, Neri F, De Domenico R, Sorrentino D, Mordente S, Damiano R, De Sio M. What happens to the abstracts presented at the Société Internationale d'Urologie meeting? Urology. 2008;71:367-371.
6. Castagnetti M, Subramaniam R, El-Ghoneimi A. Abstracts presented at the European Society for Pediatric Urology (ESPU) meetings (2003-2010): characteristics and outcome. J Pediatr Urol. 2014;10:355-360.
7. Hoag CC, Elterman DS, Macneily AE. Abstracts presented at the American Urological Association Annual Meeting: determinants of subsequent peer reviewed publication. J Urol. 2006;176:2624-2629.
8. Brazzelli M, Lewis SC, Deeks JJ, Sandercock PA. No evidence of bias in the process of publication of diagnostic accuracy studies in stroke submitted as abstracts. J Clin Epidemiol. 2009;62:425-430.
9. Collet AM, Jara-Tracchia L, Palacios SB, Itoiz ME. Dental research productivity in Argentina (1993 to 2003). Acta Odontol Latinoam. 2006;19:81-84.
10. Krzyzanowska MK, Pintilie M, Tannock IF. Factors associated with failure to publish large randomized trials presented at an oncology meeting. JAMA. 2003;290:495-501.
11. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A. 2005;102:16569-16572.
12. Masic I, Begic E. Scientometric dilemma: is H-index adequate for scientific validity of academic's work? Acta Inform Med. 2016;24:228-232.
13. Kreiner G. The slavery of the h-index-measuring the unmeasurable. Front Hum Neurosci. 2016;10:556.
14. Garfield E. Journal impact factor: a brief review. CMAJ. 1999;161:979-980.
15. Joshi MA. Bibliometric indicators for evaluating the quality of scientific publications. J Contemp Dent Pract. 2014;15:258-262.
16. De Simone B, Ansaloni L, Kelly MD, Coccolini F, Sartelli M, Di Saverio S, Pisano M, Cervellin G, Baiocchi G, Catena F. The congress impact factor: a proposal from board members of the World Society of Emergency Surgery (WSES) and Academy of Emergency Medicine and Care (AcEMC). F1000Res. 2018;7:1185.