A Few Actual Uses for ChatGPT and LLM Tools

I heartily dislike both the AI hype and the AI doom-and-gloom that make up most of the popular reading on ChatGPT and other large language model tools. These are neither the best thing since the personal computer, nor the thing that will bring higher ed crashing down around our ears. They are tools, and like all tools they have appropriate and productive uses, inappropriate and concerning uses, and things for which they are not suited (but that people are trying to use them for anyway.) You can use a chisel to do wood carving, or you can use a chisel to break open a locked door. The latter might be a good thing or a bad thing depending on circumstances. You could also try to use a chisel as a screwdriver, and it might work for some screws, but it's not the best tool and you are likely to hurt yourself if you aren't very careful. (And it's not that great for the chisel, either.)

In my own experimentation, I've come up with some use cases that I think fit into the 'appropriate and productive' category. The main thing I've found so far is that these tools are most useful for manipulating text. The truly 'generative' uses seem very superficial to me. 'Produce a [blank] in the style of [blank]' is fun the first few times, but not very interesting overall. And mostly kind of bland (which, as someone pointed out, should only be expected: GPT produces essentially the most likely or “average” text for a given prompt (1).) More like 'produce a [blank] in the style of [blank] on a bad day.'

Here's what I've found useful. I'll add to this list as I find new uses.

  1. Translation. ChatGPT is a level up from the standard translation engines available on the web. Like all machine translation, the results are a bit stilted, colloquialisms can be confusing, and less common languages give worse results, but overall I'm pleased. I suspect that all the translation engines will be incorporating LLMs (if they haven't already), and we should see improvements in those applications soon.

  2. Text mining. I was very excited to find that ChatGPT could extract semantically useful info from narrative text. This is what I did my thesis on, and a very tedious 20K-entry dataset it ended up being. I'm eager to start comparing the GPT-generated results to my previous work, and to add new entries to my dataset as soon as I'm satisfied with the quality. (A rough sketch of what this might look like through the API appears after this list.)

  3. Search assistance. I probably shouldn't have been surprised that ChatGPT could extract semantically useful info from text, since that's exactly what the 'research assistance' apps like Elicit and Consensus do. Those specialize in analyzing research papers and pulling out enough info for you to figure out whether a paper might be useful to you. Both are in heavy development right now, but they can do things like extract text related to an actual question or pull methodological info out of a selection of studies (population size, analysis techniques, etc.) (2)

  4. Transforming text into machine-readable formats. Since ChatGPT can do translation and can also “produce in the style of,” it stands to reason that it should be able to manipulate text into a particular format. And it can, at least for turning formatted citations into BibTeX, one of the importable file formats that citation managers like Zotero use. It would be tedious to do a lot of them at once because of the character limits, but I'm hoping someone will write a script using the API (a sketch of what that might look like appears after this list). I had hopes that it might be able to do the same with citations in a spreadsheet (CSV), but since the prompt box is unformatted text with a character limit, I couldn't get it to recognize more than a line or two at a time. It did a reasonable job on those few lines, however. Again, very tedious to do a lot, but it would work and might be suitable for some API scripting.

  5. Audio transcription. I've actually paid for a transcription program called MacWhisper that uses OpenAI's Whisper models to transcribe audio. It's a cut above the machine transcription available in most of the presentation tools (Zoom, PowerPoint, Google Slides), and it works locally, so it's a bit more private than the better cloud services like Otter. It still has trouble with names, but it has a new feature that lets you put in commonly mis-transcribed words, so I can probably get it to stop misspelling my name and the CINAHL database (sin all, cinder, etc.) MacWhisper is Mac only, but I just saw a project called Audapolis that's multi-platform. (If you're comfortable with Python, the underlying Whisper model can also be run directly; see the sketch after this list.)

  6. Suggesting new wording. If you've got an awkward paragraph, or want to improve the flow and readability of something, ChatGPT does a decent job. One of the first things I tried was inputting a couple of paragraphs written by someone whose native language was definitely not English. The original was comprehensible, but stilted, with some weird word choices. I'd assume, given the translation ability, that it can do the same kind of rewording in other languages as well. The results weren't spectacular, basically just bland American English academic prose, but definitely more readable. If I were doing this for anything real, I'd want to review the resulting text very carefully to be sure nothing had been factually changed.

  7. Avoiding 'blank page syndrome.' Since ChatGPT doesn't do well at producing accurate facts and references that actually exist, this is better done with another tool. I found that Perplexity gives a decent summary of what might be called 'accepted internet knowledge' on a topic, and gives citations – and you can specify what sorts of things you want as references: scholarly journals, newspapers, etc. As I mentioned previously, Elicit and Consensus will both give text extractions and summaries direct from research papers. Any of this could be used to construct an outline or scaffolding to get rid of that blank page. ChatGPT can produce an outline, too; just be sure to thoroughly check any factual claims. Really, do this anyway – just because something is common on the internet doesn't mean it's right (Perplexity), and extracted sentences taken out of context may be misleading (Elicit and Consensus.) In a way this is the reverse of #6: start with the LLM and have the human produce the final text.
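
As promised in #2, here's a rough sketch of what the text-mining use might look like through the API rather than the chat box. This is a hypothetical illustration, not my actual thesis pipeline: the field names, the sample entry, and the API key placeholder are all made up, and it assumes the openai Python package's ChatCompletion interface (the pre-1.0 style).

```python
# Hypothetical sketch: ask the chat API to pull structured fields out of a
# narrative entry and return them as JSON. Field names and the sample entry
# are invented for illustration.
import json
import openai  # pre-1.0 interface of the openai package

openai.api_key = "YOUR-API-KEY"  # placeholder

def extract_fields(entry: str) -> dict:
    prompt = (
        "Extract the following fields from the text below and return them "
        "as a JSON object: person, place, date, event. Use null for anything "
        "that isn't mentioned.\n\n" + entry
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the output as repeatable as possible
    )
    # In practice you'd want to catch JSONDecodeError here, since the model
    # sometimes wraps the JSON in extra commentary.
    return json.loads(response["choices"][0]["message"]["content"])

print(extract_fields("On 4 July 1901, Jane Doe opened a reading room in Springfield."))
```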
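
And here's the kind of "script using the API" I was wishing for in #4: loop over a list of formatted citations, convert each one to BibTeX, and write out a .bib file that Zotero can import. Again, a hedged sketch with made-up citations rather than a finished tool; you'd still want to eyeball every generated entry before importing it.

```python
# Hypothetical sketch: convert formatted citations to BibTeX one at a time
# via the chat API and collect them in a .bib file for import into Zotero.
# The citation strings are made up; check every generated entry by hand.
import openai  # pre-1.0 interface of the openai package

openai.api_key = "YOUR-API-KEY"  # placeholder

citations = [
    "Smith, J. (2020). An example article. Journal of Examples, 12(3), 45-67.",
    "Doe, A., & Roe, B. (2019). Another Example Book. Example Press.",
]

def to_bibtex(citation: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Convert the citation you are given into a single "
                        "BibTeX entry. Output only the BibTeX."},
            {"role": "user", "content": citation},
        ],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]

with open("references.bib", "w") as f:
    for c in citations:
        f.write(to_bibtex(c).strip() + "\n\n")
```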
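
Finally, for #5: MacWhisper is the convenient wrapper, but the underlying open-source Whisper model can also be run locally from Python (pip install openai-whisper, plus ffmpeg). The file name below is made up; the initial_prompt argument is the same trick as MacWhisper's "commonly mis-transcribed words" feature, nudging the model toward the spellings you list.

```python
# Hypothetical sketch: local transcription with the open-source whisper package.
import whisper

# Larger models ("small", "medium", "large") are slower but more accurate.
model = whisper.load_model("base")

result = model.transcribe(
    "lecture.mp3",  # made-up file name
    # Listing tricky names and terms biases the model toward these spellings.
    initial_prompt="Terms that come up: CINAHL, Zotero, BibTeX.",
)
print(result["text"])
```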

All of these except the last are on the order of “take this text that I give you and do something with it” rather than “produce new text from scratch.” Only the last two stray into what could be considered an ethical grey area – how much of the writing is LLM-produced and how much human-produced, no matter which comes first?

Anyway, these are what I've found actually useful so far.

(1) https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/

(2) Update (2023-04-23): Aaron Tay has also been experimenting with using LLMs for search, specifically Perplexity, and he keyed in on the ability to restrict the sources used as well. https://medium.com/a-academic-librarians-thoughts-on-open-access/using-large-language-models-like-gpt-to-do-q-a-over-papers-ii-using-perplexity-ai-15684629f02b