Reviewer of the Month (2024)

Posted On 2024-05-06 15:18:48

In 2024, mHealth reviewers continue to make outstanding contributions to the peer review process. They demonstrate professional effort and enthusiasm in their reviews and provide comments that genuinely help authors enhance their work.

Here, we would like to highlight some of our outstanding reviewers, along with brief interviews on their thoughts and insights as reviewers. Allow us to express our heartfelt gratitude for their tremendous effort and valuable contributions to the scientific process.

February, 2024
Elizabeth M. Heitkemper, University of Texas, USA

June, 2024
Stephanie Zawada, Mayo Clinic, USA

July, 2024
Corinne N Kacmarek, University of Maryland, USA

August, 2024
Yunxi Zhang, University of Mississippi Medical Center, USA

September, 2024
Andrea Vitali, University of Bergamo, Italy


February, 2024

Elizabeth M. Heitkemper

Elizabeth Heitkemper, PhD, RN, is an Assistant Professor at the University of Texas at Austin, School of Nursing. Through her training, she has cultivated expertise in the following areas: self-management science, working with a variety of medically underserved (e.g., rural, racial/ethnic minorities, and/or income constrained) groups, biomedical informatics, community engagement, and public health nursing. Her research focuses on improving the health outcomes of medically underserved adults through innovative, user-driven technology interventions to facilitate data-driven decision making. Dr. Heitkemper is currently working on numerous research projects that include the development of a precision diabetes self-management intervention that targets problem-solving using continuous glucose data, the creation and refinement of a data visualization dashboard of social determinants of health data to allow for the identification of health disparities in rural communities by public health professionals, and co-creating a research agenda through sustained community engagement to improve the health of people experiencing homelessness.

mHealth: What are the qualities a reviewer should possess?

Dr. Heitkemper: I think the most important quality a reviewer needs to possess is dedication, both to the specific manuscript review being completed and to science more broadly, since peer review plays such a critical role in the scientific process. Other qualities include knowledge and humility, such that when someone’s knowledge isn’t relevant, they have the humility to acknowledge it. Finally, I think that timeliness is critical, as we’ve all been on the other side of waiting for reviewer comments and having it take longer than expected.

mHealth: What do reviewers have to bear in mind while reviewing papers?

Dr. Heitkemper: Reviewers need to consider the methods and larger implications of the work they are reviewing. Both aspects need to be clear and relevant to mHealth to merit publication. Often the methods of a study will initially seem rigorous, but upon closer inspection and deeper thought, issues will become apparent. This is why I always start by reading the manuscript the day I agree to review it, so that I have ample time to let the methods and the way the study was conceptualized marinate. I usually find, on the second or third time I read a manuscript, that I am able to fully identify methodological issues or innovations. It was not until I started engaging with the articles I reviewed multiple times that I felt I was truly giving the work the attention it deserved. For implications, I often see discussion sections that fail to place the study’s findings within the larger context of the literature and to clearly articulate the study’s unique contributions. While challenging to do, it is so important to fully identify the study’s contribution to science and articulate for the reader how the study’s findings impact the field.

mHealth: Why do you choose to review for mHealth?

Dr. Heitkemper: I choose to review for mHealth because I believe in the journal’s aims and scope and hope to contribute to its publishing of high-quality research. Furthermore, I think that mHealth will become even more significant and relevant as technology changes and the importance of the issues the journal addresses increases.

(by Lareina Lim, Brad Li)


June, 2024

Stephanie Zawada

Stephanie Zawada is a Ph.D. candidate at Mayo Clinic College of Medicine and Science and a PhRMA Foundation Predoctoral Fellow in Health Outcomes/Value Assessment. Stephanie was a Mayo Clinic-Yale University FDA Center for Excellence in Regulatory Science and Innovation (CERSI) Scholar, an American Heart Association Cardiovascular Collaborative Scholar, a Bastiat Fellow at George Mason University’s Mercatus Center, and a visiting student at Oxford’s Technological Innovation and Digital Health programme. Her research focuses on (1) the design and conduct of decentralized clinical trials for cerebrovascular dysfunction, (2) the development of digital endpoints for phase IV trials, and (3) the translation of digital biomarkers into clinical practice. Her work has been cited by the U.S. Congress and published in numerous journals, including Journal of the American Heart Association, Mayo Clinic Proceedings: Digital Health, Journal of Digital Imaging Informatics, Sensors, Journal of Clinical and Translational Science, and Journal of Allergy and Clinical Immunology.

In Stephanie’s opinion, in clinical science, peer review is a critical part of addressing “the knowledge problem”, namely that information is decentralized and asymmetric. Decision makers, from primary investigators to policymakers, need accurate, precise, and timely summaries to stay up-to-date on relevant best practices and emerging theories. Constructive peer review plays a critical role in guiding authors to polish their initial drafts, ensuring that findings are not overstated and encouraging authors to include sensitivity analyses that probe the falsifiability of hypotheses. Tangentially, peer review functions as a temporal filter for discipline-specific trends, curating the most relevant research at a point in time.

“Though I agree new incentives are necessary for the future of academic journals, I disagree with the notion that peer review is a non-profitable endeavor,” says Stephanie. She thinks that peer-reviewed science is integral to securing intellectual property (IP). Depending on the scientific promotion requirements at a given institution, one might be required to secure IP for one’s research. Applying the results of peer review to creating IP is analogous to using carbon filament in light bulbs: one may not reap immediate profit from peer review, but it serves as a future investment in one’s area of expertise.

Seeing the prevalence of research data sharing in the past decade, Stephanie reckons that there is a lot of exciting research in machine learning and artificial intelligence right now, as there should be. Yet the dearth of large datasets in mHealth and digital health research is a colossal barrier to generating robust evidence from data. Publicly sharing data from research articles will not immediately address this issue. For most clinical research projects, publicly sharing data is rarely encouraged, even when deidentified, due to the need to protect participant identities; however, a clause allowing reasonable requests to share data with researchers and agencies is sufficient and facilitates the development of meaningful mHealth and digital health research.

(by Lareina Lim, Brad Li)


July, 2024

Corinne N Kacmarek

Dr. Corinne Kacmarek is a postdoctoral research fellow at the Veterans Integrated Services Network (VISN) 5 Mental Illness, Research, Education, and Clinical Center (MIRECC) and visiting postdoctoral scholar at the University of Maryland School of Medicine. She completed her PhD in Clinical Psychology at American University, where she worked with Dr. Brian Yates, an international expert in cost-inclusive evaluations of mental health treatment. Her graduate research integrated methodology from applied clinical psychology and economics in order to understand the cost-effectiveness and cost-benefit of mental health treatments. She believes that evaluating costs in relation to outcomes for mental health treatment can ensure investments in mental health treatment delivery systems and guide ways to overcome treatment access barriers. Since starting her postdoctoral fellowship, she has begun exploring sources of health disparities between individuals with and without serious mental illness (SMI), as well as ways to address these disparities. For example, smoking contributes to a major health disparity between individuals with and without SMI. Currently, she is using quantitative and qualitative data to better understand the experiences of SMI patients who smoke cigarettes and understand how mental health providers treat smoking in their SMI patients. She is hoping that data from this project can inform improvements in smoking treatment within mental health settings.

mHealth: What are the limitations of the existing peer-review system? What can be done to improve it?

Dr. Kacmarek: I have been serving as a reviewer for only about 5 years, so my insights may be limited by my inexperience. In my observation, high-quality peer review is time- and energy-consuming, so some may have competing demands and view this time/energy commitment as a drawback that does not outweigh the benefits. It’s hard for me to think of ways to improve someone’s intrinsic motivation to engage in peer review. I have also observed that sources of extrinsic motivation, such as incentive systems (redeemable points, for example) tend to be specific to individual journals, which may increase the length of time required to earn enough points to redeem. Creating a centralized platform across journals to accumulate reviewer points could address this issue, but I think it would be difficult to implement. Continuing to consider peer review in performance evaluations can also help create incentives to engage in peer review. Some early career investigators may also question their expertise and wonder whether they are qualified enough to serve as a peer reviewer. This is something that I have struggled with. Offering workshops about serving as a peer reviewer at conferences, for example, can help demystify the peer-review process and may help early career investigators feel more comfortable agreeing to review.

mHealth: Biases are inevitable in peer review. How do you minimize any potential biases during review?

Dr. Kacmarek: I do my best to acknowledge my biases and, if possible, avoid activating them. For example, I will avoid reviewing authors’ names, degrees, or affiliations if that information is available. If that is not possible, I try to review the manuscript with an open mind by remaining keenly aware of my biases and challenging myself to provide as fair a review as possible; for example, I will ask myself, “Would I find this to be a problem if the first author was from X institution instead of Y, or had A degree instead of B?” or “Have I provided similar feedback on other manuscripts with this methodology?” In addition, I always make a note of the strengths of the manuscript; I do this to not only balance out the suggestions I have for improvement, but to ensure that I am attending to the merits of all manuscripts I have the privilege to review.

mHealth: Peer reviewing is often anonymous and non-profitable. What motivates you to do so?

Dr. Kacmarek: In a world of digitization and big data, it is hard to be private, so I find it refreshing to be anonymous every once in a while! There are a variety of motivators for me. Most importantly, I feel compelled to give back to a system that I have benefited from, and find it intrinsically rewarding to do so. Other motivators include becoming aware of emerging research in my interest areas, staying up to date on evolving methods and statistics, and having the opportunity to learn new things. Even though I only agree to review articles for which I have relevant expertise, I sometimes conduct additional research on methods, statistics, and subject areas that I am less familiar with.

(by Lareina Lim, Brad Li)


August, 2024

Yunxi Zhang

Dr. Yunxi Zhang is an Assistant Professor of Data Science and the Associate Director of Research at the Center for Telehealth at the University of Mississippi Medical Center. Her recent work focuses on medical and health services research, particularly improving access to quality telehealth services for vulnerable and underserved populations. She applies her methodological expertise in biostatistics and health economics, driven by her interest in leveraging data-driven approaches to enhance healthcare outcomes and promote health equity. Dr. Zhang earned her PhD in Biostatistics from the University of Texas Health Science Center at Houston and her MS in Statistics from the University of North Carolina at Chapel Hill, after completing her BA in Mathematics and Applied Mathematics. She also holds an MA in Higher Education from the University of Mississippi.

Dr. Zhang reckons that peer review is crucial in maintaining the quality and credibility of academic journals. While the process can sometimes be frustrating, often involving long delays, its value cannot be overstated. From her experience as a researcher, peer review is not just about finding flaws; it is also about getting a fresh perspective from someone not involved in one’s work. The outside feedback can be precious, helping to clarify ideas, strengthen arguments, and improve methodologies. Even though it is not always fun to receive criticism, especially when it extends beyond the original scope, this process ensures that what gets published possesses clarity, rigor, and depth. She also sees peer review as a collaborative effort. As a reviewer, it is a chance to help others refine their work so that, collectively, reviewers can contribute to advancing knowledge. Therefore, while it might be challenging at times, the benefits it brings to the scientific community make it worth the effort.

Dr. Zhang thinks the biggest limitation of peer review is the delay, which can sometimes be quite significant. She indicates that finding the right reviewer is not easy and can take several months, and reviewers themselves often have limited time to carefully review the paper. She adds, “However, many journals, including mHealth, are doing a great job by having dedicated persons in the editorial office closely track the process and remind reviewers when their deadlines are approaching.”

“The reason I chose to review for mHealth was multifaceted. I have been interested in telehealth-related research, as I believe it can provide accessible and quality care to patients in need and improve population health, especially in rural and underserved areas. mHealth stands out as a good journal for publishing telehealth-related work. Additionally, I have been impressed with the editorial support at mHealth, which has been instrumental in keeping me on track and ensuring that I have the opportunity to review quality work,” says Dr. Zhang.

(by Lareina Lim, Brad Li)


September, 2024

Andrea Vitali

Andrea Vitali is an Associate Professor at the Department of Management, Information, and Production Engineering at the University of Bergamo, Italy. With over a decade of experience, his research focuses on Digital Human Modeling and computer-aided tools for 3D modeling, particularly in medical and industrial applications. His work includes eXtended Reality applications in tele-rehabilitation, usability analysis in telemedicine, and artificial intelligence for 3D modeling aimed at health, longevity, and ergonomics in Industry 5.0. His current projects emphasize the design of human digital twins and tele-rehabilitation platforms using serious games and generative AI as innovative tools to bridge digital and healthcare solutions for improved patient outcomes and workplace safety.

Prof. Vitali reckons that a good reviewer must possess a deep understanding of the subject matter, attention to detail, and a commitment to fairness. Objectivity is crucial, as the goal is to provide constructive feedback that strengthens the manuscript. A reviewer should approach each submission with an open mind, allowing them to critically assess the work without bias. In addition, effective reviewers communicate clearly, providing specific, actionable suggestions that authors can use to refine their work. Finally, time management is essential; timely reviews are valuable for both the authors and the broader research community.

From a reviewer’s point of view, Prof. Vitali believes that data sharing is essential. It enhances transparency, allowing others to validate findings and build upon them, which is foundational for scientific progress. In fields like Digital Health, sharing data can lead to rapid advancements, as researchers worldwide can collaboratively improve methodologies and technological solutions. Furthermore, open data fosters trust within the scientific community and beyond, ensuring that findings are reproducible and verifiable. For these reasons, he encourages authors to adopt data-sharing practices wherever possible.

“Balancing peer review with academic and research commitments is indeed challenging. I dedicate specific time slots for reviewing each week, usually during quieter periods when I can focus without interruption. Peer review is part of my responsibility to the academic community, so I prioritize it by treating it as part of my regular workload. What’s more, engaging in peer review is also rewarding; it keeps me updated with recent developments in my field and sharpens my analytical skills, which benefits my own research,” says Prof. Vitali.

(by Lareina Lim, Brad Li)