ACS Nano www.acsnano.org

Best Practices for Using AI When Writing Scientific Manuscripts
Caution, Care, and Consideration: Creative Science Depends on It

Cite This: ACS Nano 2023, 17, 4091-4093

Science is communicated through language. The media of scientific language are multimodal, ranging from lecturing in classrooms, to informal daily discussions among scientists, to prepared talks at conferences, and, finally, to the pinnacle of science communication: the formal peer-reviewed publication. The arrival of language tools driven by artificial intelligence (AI), like ChatGPT,1 has generated an explosion of interest globally. ChatGPT has set the record for the fastest-growing user base of any application in history, with over 100 million active users in just two months, as of the end of January 2023.2 ChatGPT is merely the first of many AI-based language tools, with more either in preparation or soon to be launched.3-5 Many in scientific research and universities around the world have raised concerns about ChatGPT's potential to transform scientific communication6 before we have had time to consider the ramifications of such a tool or to verify that the text it generates is factually correct. The human-like quality of the text produced by ChatGPT can deceive readers into believing it is of human origin.7 It is now apparent, however, that the generated text can be fraught with errors, can be shallow and superficial, and can include fabricated journal references and inferences.8 More importantly, ChatGPT sometimes makes connections that are nonsensical and false.
We have prepared a brief summary of some of the strengths and weaknesses of ChatGPT (and future AI language bots) and conclude with a set of recommended best practices for scientists using such tools at any stage of their research, particularly at the manuscript-writing stage.9,10 It is important to state that, even among the authors here, there is a diversity of thought and opinion, and this editorial reflects the middle-ground consensus. In its current incarnation, ChatGPT is merely an efficient language bot that generates text through linguistic connections.11 It is, at present, "just a giant autocomplete machine".12 Since ChatGPT is the first of many models that will undoubtedly improve rapidly, within a few years we will almost certainly look back at ChatGPT the way we now look at an old computer from the 1980s. It must be recognized that ChatGPT relies on its existing database and, at the time of writing, fails to include information published or posted after 2021, restricting its utility for writing up-to-date reviews, perspectives, and introductions. For reviews and perspectives, ChatGPT is further limited because it lacks the analytical capabilities that scientists are expected to possess and the experience that informs our judgment. The most important concern for us as scientists is that these AI language bots are incapable of understanding new information, generating insights, or performing deep analysis, which limits the discussion within a scientific paper. While the output appears well formulated, it is superficial, and over-reliance on it could squelch creativity throughout the scientific enterprise. AI tools are adequate for regurgitating conventional wisdom but not for identifying or generating unique outcomes.
They might be even worse at assessing whether a unique outcome is spurious or groundbreaking. If this limitation holds for ChatGPT and the other language chatbots under development, then reliance upon AI for this purpose may reduce the frequency of future disruptive scientific breakthroughs. This is concerning, since a 2023 article has already concluded that the frequency of such disruptive breakthroughs is on a negative trajectory.13 Scientific research is becoming less disruptive: think more cookie cutter and less CRISPR.

Published: February 27, 2023
https://doi.org/10.1021/acsnano.3c01544

■ STRENGTHS OF THE ChatGPT LANGUAGE BOT

An AI-driven language bot can

(i) help to break mental logjams when writing, or when struggling to write those first words. Having some text to start with can enable a writer to overcome an activation barrier to productivity. That said, be aware that this starting point might mentally pin you to a certain way of thinking and writing, so do not let the generated text limit your creativity and insights. A better approach might be to use ChatGPT after completing a draft of your manuscript, as a complementary perspective, to determine whether key topics or points were missed and to spark new ideas and directions.

(ii) make interesting analogies, when properly prompted, and generate seemingly creative links between disparate concepts and ideas, although these require a reality check to ensure that they are reasonable or plausible.

(iii) be used effectively to improve the title, abstract, and conclusion of your manuscript and to tailor the manuscript to better match a journal's scope and readership.

(iv) identify references for a specific topic that might be missed by conventional literature searches.
Reading and including such references can enrich your understanding of a topic area, but they must be carefully read or scanned to ensure that they are correct and relevant.

(v) provide guidance on writing structure by breaking a difficult topic into smaller pieces. The bot might make poor suggestions, however, so caution is required.

(vi) level the playing field by facilitating composition by non-native English speakers. This language bot and others will almost certainly be integrated directly into other interfaces, such as Microsoft Office 365.14,15

(vii) help a writer be more thorough when covering a topic by suggesting aspects they had not considered.

(viii) provide knowledge in an area with which one has little familiarity, in a structured, easy-to-digest manner. Keep in mind, however, that the output might be incomplete or lacking in creative insights, as described below.

(ix) develop code in Python and other computer languages.

■ CONCERNS REGARDING THE ChatGPT LANGUAGE BOT

An AI-driven language bot might

(i) be fast and easy to use, but perhaps too easy if one fails to use it responsibly and with care.

(ii) be used to replace critical thinking and thorough literature reviews, to the detriment of the user. For students, writing their first manuscripts is a transformative training experience; over-reliance on these language bots deprives them of that opportunity, limiting their intellectual growth and confidence.

(iii) lead to banal, cookie-cutter, and uninteresting science if not used only as a jumping-off point for creative science. AI tools are typically good at regurgitating conventional wisdom but weak at identifying and generating unique outcomes, and they can be even worse at judging whether a unique outcome is spurious, anomalous, or groundbreaking.

(iv) be used without reading the actual papers that support claims made by the author.
As mentioned earlier, ChatGPT can invent references or spurious correlations. The output of the AI model cannot be taken at face value; all outputs must be subjected to critical review to catch errors, missing key information, and unrelated claims. ChatGPT might be more likely to generate incorrect information when the available data are incomplete or outdated.

(v) fail to present both sides of controversial topics, particularly without user input. ChatGPT cannot express disruptive concepts.

(vi) inherit the built-in biases and falsehoods intrinsic to the scientific enterprise. It can suppress minority views that question or oppose a well-established concept or explanation of a scientific phenomenon, or overlook works with fewer citations, owing to intrinsic biases.16

(vii) generate text that is not forward-looking, since it might merely summarize the consensus without user intervention. Introductions and review papers based solely upon the output of ChatGPT will lack thoughtful insights into where a field is headed.

(viii) lead to an increase in submissions of perspectives, accounts, and reviews that lack nuance in the storyline and forward-looking discussion, since these manuscript types can be easily generated by ChatGPT from existing information.

(ix) generate output that is incorrect or has recently been shown to be false.8,17 Outputs can also be manipulated, with tailored prompts, to support particular arguments.

(x) present major challenges with regard to the reporting of clinically relevant findings, which requires transparency in outcomes reporting, clear communication of trial designs, and other information.18 Given the important role that publications can play in reporting clinically actionable findings that can drive practice change, the use of ChatGPT in these circumstances might require substantial oversight and disclosure.

■ OUR RECOMMENDATIONS FOR THE USE OF AI LANGUAGE BOTS FOR SCIENTIFIC COMMUNICATION

(i) Acknowledge, in the Acknowledgments and Experimental Sections, your use of an AI bot/ChatGPT to prepare your manuscript. Clearly indicate which parts of the manuscript used the output of the language bot, and provide the prompts and questions and/or a transcript in the Supporting Information.

(ii) Remind your coauthors, and yourself, that the output of the ChatGPT model is merely a very early draft, at best. The output is incomplete and might contain incorrect information, and every sentence and statement must be considered critically. Check, check, and check again. And then check again.

(iii) Do not use text verbatim from ChatGPT. These are not your words. The bot might also have reused text from other sources, leading to inadvertent plagiarism.

(iv) Verify any citations recommended by an AI bot/ChatGPT against the original literature, since the bot is known to generate erroneous citations.

(v) Do not include ChatGPT or any other AI-based bot as a coauthor.10,19 It cannot generate new ideas or compose a discussion based on new results; that is our domain as humans. It is merely a tool, like many other programs, for helping with the formulation and writing of manuscripts. Please refer to the ACS Nano author guidelines for more information.

(vi) ChatGPT cannot be held accountable for any statement or ethical breach. As it stands, all authors of a manuscript share this responsibility.

(vii) Most importantly, do not allow ChatGPT to squelch your creativity and deep thinking. Use it to expand your horizons and spark new ideas!

To conclude, science operates upon an honor system. While there are now tools to identify text generated by ChatGPT,20 these AI language bots will continue to improve, both in their performance and in their sophistication, and thus scrutinizing their use will become increasingly difficult.
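Recommendation (i) asks authors to preserve their prompts and transcripts for the Supporting Information. As a minimal sketch of how a research group might do this in practice (the helper names and the JSON-lines format are our own illustrative assumptions, not an ACS requirement), one could keep a timestamped log of every exchange with the bot:

```python
# Illustrative sketch: log each AI prompt/response pair so the full transcript
# can later be included in a manuscript's Supporting Information.
import json
from datetime import datetime, timezone
from pathlib import Path

def log_ai_interaction(logfile: Path, prompt: str, response: str) -> None:
    """Append one prompt/response pair, with a UTC timestamp, as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
    }
    with logfile.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def load_transcript(logfile: Path) -> list:
    """Read the full transcript back, in chronological order."""
    with logfile.open(encoding="utf-8") as f:
        return [json.loads(line) for line in f]
```

Because each record carries its own timestamp, the log doubles as a record of which parts of the drafting process involved the bot, which is exactly what recommendation (i) asks authors to disclose.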
Please use these tools with extreme care, and remind your colleagues and coauthors of these concerns and best practices when writing your own manuscripts. Ultimately, because scientific papers rely on human-generated data and interpretations, the scientific story requires creativity and know-how that will be difficult to replicate using AI-based language bots.

Julian M. Buriak orcid.org/0000-0002-9567-4328
Deji Akinwande orcid.org/0000-0001-7133-5586
Natalie Artzi orcid.org/0000-0002-2211-6069
C. Jeffrey Brinker orcid.org/0000-0002-7145-9324
Cynthia Burrows orcid.org/0000-0001-7253-8529
Warren C. W. Chan orcid.org/0000-0001-5435-4785
Chunying Chen orcid.org/0000-0002-6027-0315
Xiaodong Chen orcid.org/0000-0002-3312-1664
Manish Chhowalla orcid.org/0000-0002-8183-4044
Lifeng Chi orcid.org/0000-0003-3835-2776
William Chueh orcid.org/0000-0002-7066-3470
Cathleen M. Crudden orcid.org/0000-0003-2154-8107
Dino Di Carlo orcid.org/0000-0003-3942-4284
Sharon C. Glotzer orcid.org/0000-0002-7197-0085
Mark C. Hersam orcid.org/0000-0003-4120-1426
Dean Ho orcid.org/0000-0002-7337-296X
Tony Y. Hu orcid.org/0000-0002-5166-4937
Jiaxing Huang orcid.org/0000-0001-9176-8901
Ali Javey orcid.org/0000-0001-7214-7931
Prashant V. Kamat orcid.org/0000-0002-2465-6819
Il-Doo Kim orcid.org/0000-0002-9970-2218
Nicholas A. Kotov orcid.org/0000-0002-6864-5804
T. Randall Lee orcid.org/0000-0001-9584-8861
Young Hee Lee orcid.org/0000-0001-7403-8157
Yan Li orcid.org/0000-0002-3828-8340
Luis M. Liz-Marzán orcid.org/0000-0002-6647-1353
Paul Mulvaney orcid.org/0000-0002-8007-3247
Prineha Narang orcid.org/0000-0003-3956-4594
Peter Nordlander orcid.org/0000-0002-1633-2937
Rahmi Oklu orcid.org/0000-0003-4984-1778
Wolfgang J. Parak orcid.org/0000-0003-1672-6650
Andrey L. Rogach orcid.org/0000-0002-8263-8141
Mathieu Salanne orcid.org/0000-0002-1753-491X
Paolo Samorì orcid.org/0000-0001-6256-8281
Raymond E. Schaak orcid.org/0000-0002-7468-8181
Kirk S. Schanze orcid.org/0000-0003-3342-4080
Tsuyoshi Sekitani orcid.org/0000-0003-1070-2738
Sara Skrabalak orcid.org/0000-0002-1873-100X
Ajay K. Sood orcid.org/0000-0002-4157-361X
Ilja K. Voets orcid.org/0000-0003-3543-4821
Shu Wang orcid.org/0000-0001-8781-2535
Shutao Wang orcid.org/0000-0002-2559-5181
Andrew T. S. Wee orcid.org/0000-0002-5828-4312
Jinhua Ye orcid.org/0000-0002-8105-8903

■ AUTHOR INFORMATION

Complete contact information is available at: https://pubs.acs.org/10.1021/acsnano.3c01544

Notes
Views expressed in this editorial are those of the authors and not necessarily the views of the ACS.

■ REFERENCES

(1) https://openai.com/blog/chatgpt/ (accessed February 2, 2023).
(2) ChatGPT Sets Record for Fastest-growing User Base - Analyst Note; https://www.nasdaq.com/articles/chatgpt-sets-record-for-fastest-growing-user-base-analyst-note (accessed February 2, 2023).
(3) China's Baidu to Launch ChatGPT-style Bot in March - Source; https://www.reuters.com/technology/chinas-baidu-launch-chatgpt-style-bot-march-source-2023-01-30/ (accessed February 2, 2023).
(4) Google Is Asking Employees to Test Potential ChatGPT Competitors, Including a Chatbot Called 'Apprentice Bard'; https://www.cnbc.com/2023/01/31/google-testing-chatgpt-like-chatbot-apprentice-bard-with-employees.html (accessed February 2, 2023).
(5) Anthropic, an A.I. Startup, Is Said to Be Close to Adding $300 Million; https://www.nytimes.com/2023/01/27/technology/anthropic-ai-funding.html (accessed February 2, 2023).
(6) Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach; https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html (accessed February 2, 2023).
(7) Abstracts Written by ChatGPT Fool Scientists; https://www.nature.com/articles/d41586-023-00056-7 (accessed February 2, 2023).
(8) Did ChatGPT Just Lie To Me?; https://scholarlykitchen.sspnet.org/2023/01/13/did-chatgpt-just-lie-to-me/ (accessed February 2, 2023).
(9) Tools Such as ChatGPT Threaten Transparent Science; Here Are Our Ground Rules for Their Use; https://www.nature.com/articles/d41586-023-00191-1 (accessed February 2, 2023).
(10) ChatGPT Is Fun, But Not an Author; https://www.science.org/doi/10.1126/science.adg7879 (accessed February 2, 2023).
(11) ChatGPT Has Convinced Users That It Thinks Like a Person. Unlike Humans, It Has No Sense of the Real World; https://www.theglobeandmail.com/opinion/article-chatgpt-is-a-reverse-mechanical-turk/ (accessed February 2, 2023).
(12) Grimaldi, G.; Ehrler, B. AI et al.: Machines Are About to Change Scientific Publishing Forever. ACS Energy Lett. 2023, 8, 878-880.
(13) Papers and Patents Are Becoming Less Disruptive Over Time; https://www.nature.com/articles/s41586-022-05543-x (accessed February 2, 2023).
(14) Microsoft Is Looking to Add ChatGPT to Office 365; https://www.forbes.com/sites/quickerbettertech/2023/01/15/microsoft-is-looking-to-add-chatgpt-to-office-365and-other-small-business-tech-news-this-week/?sh=65366dfb6f96 (accessed February 2, 2023).
(15) Microsoft Bets Big on the Creator of ChatGPT in Race to Dominate A.I.; https://www.nytimes.com/2023/01/12/technology/microsoft-openai-chatgpt.html (accessed February 2, 2023).
(16) Lerman, K.; Yu, Y.; Morstatter, F.; Pujara, J. Gendered Citation Patterns Among the Scientific Elite. Proc. Natl. Acad. Sci. U.S.A. 2022, 119, e2206070119.
(17) Disinformation Researchers Raise Alarms About A.I. Chatbots; https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html (accessed February 9, 2023).
(18) For example, ensuring compliance with Consolidated Standards of Reporting Trials (CONSORT) guidelines.
(19) ChatGPT Listed as Author on Research Papers: Many Scientists Disapprove; https://www.nature.com/articles/d41586-023-00107-z (accessed February 2, 2023).
(20) https://gptzero.me/.