Machine Lit
Sudeepto Roy | Klish Way

Two esteemed Sandpiper editors recently inquired about GPT-3, an AI (artificial intelligence) program that purports to author human-like text. Since the very beginning of modern computing, pioneers such as Alan Turing, the British computer scientist who broke the Nazi Enigma Code, have grappled with the question “can machines think?” There is even a test of AI efficacy, eponymously known as the Turing Test, to see whether a computer’s predictions and actions can be deemed indistinguishable from those of a human. Let us try this test on this article. Exactly four sentences have been generated using an AI model. Can you spot them?

AI has progressed since the 1950s into many branches of inquiry, such as machine learning, robotics, and natural language processing (NLP); one branch of NLP deals with text processing. The obvious uses of such techniques include real-time translation, article and book summarization, natural chatting (employed widely for customer service on websites), and detection of academic plagiarism. One major advance in the field of authoring, known as GPT (Generative Pre-trained Transformer), has recently occurred. It comes from OpenAI, a for-profit AI research laboratory based in San Francisco.

GPT will revolutionize authoring of articles. GPT aims to provide high quality authoring tools to all professional researchers. Such tools will help to boost the system’s global adoption among academic publishers. As such, it needs an open mind on the strengths of open access as a publishing tool.

As with much of computing, every major advance has been met with equal doses of wonderment and derision. Will such AI-bots take away writing jobs? Will they make human authors redundant? Will this contribute to more fake news? While these are all legitimate concerns that deserve the full weight of ethical, moral, economic, and legal scrutiny, let me propose a viewpoint of collaborative advancement.

As an engineer, for instance, I am required to consume vast amounts of technical and policy literature that is published worldwide at a torrential pace, often in languages I am not familiar with. A program that generates concise summaries of published material and the latest inventions, highlighting the problems they solve and those they don’t, would be very informative, leaving me time to focus on the truly creative aspects of my work: design, usability, and utility. Just like spelling and grammar checkers, GPT-like programs would be added to the productivity toolkit of human authors.
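
For the technically curious, here is a minimal sketch of what such a summarization helper might look like. It is written in Python using the open-source Hugging Face transformers library, which I have chosen purely for illustration; it is an assumption on my part and is neither GPT-3 itself nor any particular product.

    # A minimal sketch of automated summarization (illustrative only),
    # using the open-source Hugging Face "transformers" library as a
    # stand-in for a GPT-like authoring assistant.
    from transformers import pipeline

    # Load a pre-trained summarization model; it downloads on first use.
    summarizer = pipeline("summarization")

    article_text = (
        "Paste a long technical or policy passage here. Longer inputs "
        "work better; the model condenses the passage into a few "
        "sentences, and the reader judges what it captured and missed."
    )

    # Ask the model for a short summary of the passage.
    result = summarizer(article_text, max_length=60, min_length=20, do_sample=False)
    print(result[0]["summary_text"])

The point of the sketch is not the specific library but the workflow: the human supplies the source material and the judgment, while the machine supplies a first-pass condensation.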

As for Turing’s original question, I turn to MIT Sloan School of Management Professor Thomas Malone’s definition of AI as “machines acting in ways that seem intelligent.” “Seem” is the operative word here. For instance, GPT does not care why it authored some text in the first place. It is still soulless.

© 2007-2020 Del Mar Community Alliance, Inc.  All rights reserved.