Unless you’ve been living under a rock, you know about ChatGPT. The chatbot, driven by artificial intelligence (AI) and created by OpenAI in San Francisco, California, provides eerily human-like responses to user questions (called prompts) on almost any subject. ChatGPT is trained on a vast corpus of text, and its ability to engage in text-based conversation means that users can refine its responses. Even if its initial answers are wonky, it often eventually produces accurate results, including software code.
Researchers can use ChatGPT to debug and annotate code, translate software from one programming language to another and perform rote, boilerplate operations, such as plotting data. A March preprint reported that the program could solve 76% of 184 tasks in an introductory bioinformatics course, such as working with spreadsheets, after a single try, and 97% within seven attempts (ref. 1).
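To make the "plotting data" example concrete, below is a minimal sketch of the kind of boilerplate a chatbot typically drafts when asked to, say, "plot response against dose and save the figure". The column names and values here are placeholder assumptions for illustration, not material from the course or preprint.

```python
# A minimal sketch of chatbot-style boilerplate plotting code.
# The data are made-up placeholder values standing in for a real dataset.
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder measurements (hypothetical column names and values)
df = pd.DataFrame({
    "dose_uM": [0.1, 0.5, 1.0, 5.0, 10.0],
    "response": [2.1, 3.8, 5.5, 8.9, 9.7],
})

# Scatter plot of response versus dose, then save to disk
ax = df.plot(x="dose_uM", y="response", kind="scatter")
ax.set_xlabel("Dose (µM)")
ax.set_ylabel("Response")
ax.set_title("Dose-response (placeholder data)")
plt.tight_layout()
plt.savefig("dose_response.png", dpi=300)
```

In practice, this is exactly the sort of rote scaffolding a chatbot can produce quickly, leaving the researcher to check the column names, units and output against their own data.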
That’s good news for researchers who feel uncomfortable coding, or who lack the budget to employ a full-time programmer — for them, chatbots can be a democratizing tool.
Yet for all their apparent sentience, chatbots are not intelligent. They have been called stochastic parrots, randomly echoing back what they’ve seen before. Amy Ko, a computer scientist at the University of Washington in Seattle, invokes a long-running US quiz show to describe the tool’s limitations, writing on the Mastodon social-media site: “ChatGPT is like a desperate former Jeopardy contestant who stopped following pop culture in 2021 but really wants to get back into the game, and is also a robot with no consciousness, agency, morality, embodied cognition, or emotional inner life.” (The data used to train ChatGPT extend only to 2021.)
In short, ChatGPT and related tools based on large language models (LLMs), which include Microsoft Bing and GitHub Copilot, are incredibly powerful programming aids, but must be used with caution. Here are six ways to do so… Read the full post on the Nature Careers website.