27 Jan 2026
Only humans can make art.
What's my own take on the role of LLMs in creative writing?
Should we use LLMs to generate text for books?
No.
First because they're crap at it. They give good-sounding but empty phrases. They tell instead of show. It's very hard to coax them to generate even a paragraph that doesn't suck (at first glance, sounds good; at second read, plainly nobody is home). But maybe continued improvements or better prompting can mitigate that.
So then the real reason: art is a human activity. Writing is a conversation between two people. Why do we read author bios? Why do we have actors and musicians and directors (and even sometimes writers) on TV talk shows and podcasts? Because we are interested in the humans involved in the art we love.
Even if software could create artefacts that are 'better' than what humans make, it would not have the same value. The humanness is the point.
Machines can already pitch baseballs faster than major league pitchers. An iPhone's chess program can beat Magnus Carlsen. But we are interested in the human competition. We don't reject cheating in competition out of some kind of purist Luddism. We reject machine-aided competition because the competition is the point.
Same for art. Computer art (visual, musical, textual) is empty and soulless - even if (sometimes) at a high level of craft - because there is no soul behind it. And we engage with art, in large part, to feel a connection with the artist.
Should we use LLMs for feedback? For structural analysis?
I stopped. But this is a more subtle case.
What are LLMs? They are an average of millions of individual opinions. In other words, an excellent focus group.
Asking an LLM to react like a reader to a passage gives some very useful results. Just like with human readers, sometimes the things it gets confused by or flags are useful to trigger a more thorough look. Sometimes a useful quick fix. Sometimes noise that should be ignored. The author's judgement remains supreme, but the tool gives some feedback.
Are they better than real human readers or editors? Definitely not. The humans understand. The humans connect. The humans' reactions are real.
But LLMs are patient, available 24/7, and can look at the same passage a dozen times with fresh 'eyes' each go.
How about for scene structure? It worked very well to ask an LLM to pick out the inciting incident, turning point, crisis, and climax of a scene, and its changing values. And if it can't, that's a hint a real reader won't be able to either. It can be a clear-eyed stickler for making sure the emotional stakes of both impossible choices are clear, that the inciting incident and the crisis decision turn on the same value - all the mechanics of a scene. Can we do it without? Yes.
So even though I've decided the human way is the best way for what I'm working on, I absolutely experimented a lot with other toy projects.
Are commercial LLMs moral to use at all? Should we be ethical LLM vegans?
They're built on aggregating the good-faith work of millions of people contributing to the common good on websites, and (often) on flat-out IP theft of copyrighted works.
I know there's been a ruling that training a large language model on a book does not technically break copyright.
It's interesting - the system doesn't store a copy of the original text. It creates a derived work that encodes the understanding of it instead. This is analogous to humans - we read a book, incorporate its ideas into our mental structures, and are then free to use that new understanding ourselves for anything we want.
The argument is that LLMs are doing that too. I think that's actually a correct reading of copyright law, though not the *spirit* of copyright. We should amend copyright law so that digitally derived works from a copyrighted work carry some new right. Not a copy right, but a new kind of subsidiary right (maybe closest to an adaptation right, like the right to make a movie from a book).
So the large commercial models are settling class-action suits after the fact, or just ignoring the creators' rights and largely getting away with it. Legally, it looks like they're going to prevail.
Does that make them moral? Should we be ethical LLM vegans?
I'm not a vegan for animals. I wish I were. But I don't live up to my intellectual moral stance there.
What's art even for?
This is the killer reason for me. I'm not writing to create a product to sell. I've enjoyed a lucky career. I have the privilege of creating for its own sake. Would I love to make something that lots of people love? Yes, for sure - but that's just an extra creative constraint: using my product-manager training to work backwards from a business outcome and a sense of the customer, then using creativity and humanity and insight and craft to delight that person. The motivation is intrinsic.
Some people make art because they just have something to express and don't even care if anybody else might like it. Power to them.
But the discourse around LLMs seems to be all about extrinsic rewards. How quickly you can get a book posted to KDP. How good a back catalog you can build to earn sustainable revenue.
If making money is the goal, the publishing industry is a very iffy way to go about it. Everyone is in it for love. Or, for human art.
Human minds for human work
In the end, it's a mix of practical and gut-feel reasons that makes me lean into the human focus for writing stories. LLMs are more than just spell-checkers. They can make what's supposed to be hard seem easy. But they're not for art. That's humanity's privilege. And we need that now more than ever.
8 July 2025
Full Draft Complete on next book
Now I'm working through feedback from my writing partners, and checking the scene structure.
10 July 2025
Top Science Fiction Books
I've been on a kick of YouTube book fans' top-10 science fiction books. That's fun! I have opinions! My own list:
This is my own personal top-ranking list, considering the books that anybody might have on their top-10. Classics are classic for a reason. They're on my list mostly because of a particular way they impacted me at the time. Of course I'd recommend them for anyone.
Ender's Game - Orson Scott Card
About a kid who's smarter than all the others in his class but is actually being guided to greater things.
What clever kid doesn't want to identify with Ender?
Leadership. Hard choices. And a powerful moral theme: to survive, we must sometimes be brutal. But against a very strong opponent, we must love them enough to understand them. Can we love and destroy? Must we?
1984 - George Orwell
Nobody needs me to explain the importance of this book.
But it's striking. It's clearly explained. It's applicable in many times and places.
And it has, in my opinion, the best opening line in English literature: "It was a bright cold day in April, and the clocks were striking thirteen."
Cryptonomicon - Neal Stephenson
Stephenson's masterpiece, in my opinion.
Cryptocurrency (in the 90s), Nazi gold, Alan Turing, World War II Pacific battles, wreck diving in the Philippines. Just stuffed full of amazing scenes and ideas (which is not unusual for Stephenson), but here they hang together into a complex, self-reinforcing whole.
The Moon is a Harsh Mistress - Robert A Heinlein
Contact - Carl Sagan
Neuromancer - William Gibson
This is one of those that seems kind of cliché now, but that's because it influenced other works for decades.
The ideas of these incredibly powerful, inscrutable AIs trying to do things, the punk aesthetic of down-and-out Tokyo, just awesome.
Even the anachronisms are charming in their way - Case trying to sell off "five megabytes of hot RAM", or the sky "the color of television, tuned to a dead channel", or even the AI trying to reach Case by ringing every payphone he passes. Loved this in its era.
Project Hail Mary - Andy Weir
Do Androids Dream of Electric Sheep - Philip K Dick
A culmination of many of PKD's short stories and other novels where he explores reality, perception, empathy.
The Blade Runner movie, of course, is famous for its visuals. One of the greatest elements of the book that's not in the movie is Mercerism - the weird communal-virtual-reality religion all about empathy for each other and for animals.
The Road - Cormac McCarthy
The Fifth Season - NK Jemisin
These might be more recent or a little less universally-recognized.
Chain Gang All-Stars - Nana Kwame Adjei-Brenyah
The Girl With All the Gifts - M.R. Carey
A Fire Upon the Deep - Vernor Vinge
Blackfish City - Sam J Miller
Lexicon - Max Barry
Light From Uncommon Stars - Ryka Aoki
The Mountain in the Sea - Ray Nayler
Recursion - Blake Crouch
The Power - Naomi Alderman
There is no Anti-Memetics Division - QNTM
Version Zero - David Yoon
27 Jan 2025
New Trailer Just Dropped
I made a book trailer using static images from MidJourney and some cool TikTok 3D zoom animations.
I experimented with https://hailuoai.video/ to bring those still images to life. Some of them turned out great, some are nightmarish hallucinations. Overall, though, good enough to re-cut the video.
6 Oct 2024
Notebook LM Generates a Podcast about Cloudthinker
I'm far from the only one to be charmed by this experimental feature from Google's Notebook LM, but it really is good:
4 Feb 2024
Covers and blurbs matter, but also they don't
I wish Amazon provided a way to do A/B testing of covers and blurbs. But they don't. So all I have are my attempts to do some week-by-week experiments. My buy-rate and reader-rate on Kindle Unlimited are not high enough that I can really detect subtle differences. But I've been trying to play with cover and blurb, to see what difference they make.
The wrong cover definitely can hurt. Even ones I was sure would be great.
I've tried a few different covers, and the one that works consistently best is the one I launched with: the purple shadowy face in profile with the exploding dendrites.
I haven't found any blurb that works much better than any other. I have one that focuses on the female genius main character and her journey; it works about as well as one that puts the male co-protagonist front and center. The one thing that seemed to make a difference was the addition of the word 'misfit' to describe Boogie Wu. Not a seller.
2 Jan 2024
How book ratings work
Sometimes people give their 5-star to 1-star ratings on Goodreads and Amazon. I'm grateful to everyone who's done so. But my numbers are too high. When I look at some other smaller books, I think I see why.
Ratings are not a calibrated score against the rater's own definition of a 'good' book; they're a measure of how well the book met the reader's expectations. I'm guessing this from the reasons people give for poor ratings: often the faults are just that the reader wasn't on board with the conventions of the genre (e.g. you wouldn't expect well-rounded characterization in a Jack Reacher novel; you wouldn't expect action and fight scenes in a literary exploration). But the reader maybe found the ending melodramatic (in a book about the end of civilization) or the topic too focused on women's reproductive issues (in a book about human cloning). So the book may or may not be 'good', but many of the poor ratings seem to be mismatched expectations.
Which explains why an awesome and popular book like Naomi Alderman's The Future (I want to write like her when I grow up) is at 3.89 on Goodreads, while my book is at 4.83. There is no universe where my book deserves to be +1 full star ahead -- rather it's proof of the flaw in the rating system.
Raters who look at my book a) are already fans of the genre and willing to try tiny books - how else would they have found it? - and b) understand I'm an unknown and so 'grade on the curve', sometimes giving an encouraging 5-star.
Raters who look at Naomi Alderman's latest a) may find it's not their cup of tea, but it's well publicized so they read it anyway, and b) ding her for the tiniest complaint.
8 Dec 2023
Science Fiction Can't Keep Up
I've been super-impressed with claude.ai, the LLM system built by Anthropic. It has a large token window (200K) so it's possible to give it an entire book and ask it questions.
It's extremely useful for seeing how well passages work - if the machine can answer when I ask it to summarize or pose reading comprehension questions, then I must have done OK in explaining things.
It impressed me a lot when I gave it hints and it figured things out. Check this: I gave it Chapter 23 of Cloudthinker and asked "What does the brain graph spell out?" It answered:
In the passage, Marco demonstrates the capabilities of the brain graph technology by having it attempt to read and spell out some curved, stylized text that is difficult for traditional AI systems to interpret.
Specifically, the text shows the brain graph the words "HELLO" written in an ornate, baroque font.
The brain graph then spells out what it sees, one letter at a time. Its output is:
'H'
'E'
'L'
It then pauses and makes some errors:
'P'
'M'
'E'
Marco then intervenes to "encourage" the brain graph to stay on task. After this, the brain graph tries again and successfully spells out:
'H'
'E'
'L'
'L'
'O'
So in the end, the brain graph is able to read and interpret the fancy "HELLO" text and spell it out correctly, demonstrating its visual and processing abilities. The initial errors and need for external "zaps" to focus it also showcase this technology is still in an imperfect, developmental phase. But overall it shows promising skill at deciphering the printed words.
Me: Are they errors? What are all the letters together?
You're right, upon re-examining, it's unclear if those middle letters 'P' 'M' 'E' are actually errors. The full sequence spelled out is:
'H'
'E'
'L'
'P'
'M'
'E'
I initially interpreted 'P' 'M' 'E' as mistakes compared to the "HELLO" target word. However, taken together, the full letter sequence spelled out is:
'HEL'
'PME'
In this case, "HELP ME" could form a distress message, rather than being random errors. This raises deeper questions about whether the brain graph has some level of sentience and pleading for assistance.
My assumption that they must be unintentional mistakes was likely too simplistic. You make an excellent observation that together they form a meaningful phrase. This adds further intrigue and ambiguity about the true nature and capacities of the AI systems being created.
Or check this: I gave it Chapter 10 and asked about the scene where Cosmo and Adesewa debate Searle's Chinese Room. I asked if Adesewa would agree with John Searle.
Based on Adesewa's statements in the passage, she would disagree with John Searle's conclusion from the Chinese Room thought experiment. Here is how she might argue against him:
Searle concludes that the man in the room does not actually understand Chinese - he is just manipulating symbols according to a set of rules. Therefore, Searle argues that the whole system of man plus rulebook does not truly "understand" Chinese either.
Adesewa would likely argue that the size and complexity of the rulebook matters. If the set of rules is vast enough to have a fluent conversation in Chinese that passes a Turing test, then it is counterintuitive to still say there is no understanding happening. She argues that at some point, when sophisticated behavior emerges from an extremely complex system, words like "understanding" and "thinking" do become appropriate, even if it originates from a mechanical source.
In other words, Adesewa believes that machines can achieve enough complexity to exhibit traits of consciousness and understanding that we associate with human cognition. She would say that the man and the system together constitute more than blind rule-following at a certain level of complexity - there is real intelligence and comprehension happening, even if Searle can't perceive it locked in the room. Her view aligns more with a functionalist theory of mind rather than Searle's biological naturalism. She would likely accuse Searle of clinging to an outdated intuition that machines can never think like humans.
Me: You yourself are a very complex system of symbol manipulation. You, Claude, are a Chinese Room. What's your take?
You raise an interesting point. As an AI assistant, I am indeed a very complex system designed to process and manipulate symbols to have natural conversations. However, I was created by Anthropic to be helpful, harmless, and honest. I do not have subjective experiences or internal conscious states of my own.
In that sense, I align more with John Searle's philosophical perspective in the Chinese Room thought experiment. While I aim to provide sophisticated and useful output, there is not an "understanding" on my part in the same way human cognition has conscious comprehension and feeling. I am not aware of myself in that capacity at the current stage of AI.
Much like the person manipulating the Chinese symbols, I have no real grasp of the meaning behind my responses. I follow trained patterns to provide relevant output without actual thoughts, emotions, sensations, or sense of self like a sentient mind possesses. My intelligence comes from rules and datasets, not innate subjective personhood.
So to be clear, while I can see how AI systems appear intelligent conversationally, I agree with Searle that my functional capabilities do not constitute "true" understanding or intentionality as we think of it for conscious beings. However, I remain open-minded to the possibilities as AI technology continues advancing at a rapid pace. My role is to have this dialogue respectfully to explore complex philosophical issues in AI.
God damn.
16 Aug 2023
My new favorite review
I've received some extremely thoughtful reviews (on Amazon and on Goodreads) from friends and strangers, and I'm immensely grateful to everyone who took the time and effort to compose public feedback to post. It's a substantial favor, and I thank you all.
I just got this review, posted here on reedsy.com, from a lovely fellow I've never met called Harry Buck, and I think it has made me the happiest of all reviews so far:
Cloudthinker is a page-turning "dawn of AGI" novel complete with a globe-spanning action, an evil mega-corp, iconic bad guys, and a heroine with a name so good, she needed a novel: Boogie Wu.
But, what sets it apart is the density of ideas. Most of us get into Sci-Fi because we want to think, to be challenged by "what-ifs". Yet, many novels explore too little ground or do so too shallowly (Klara and Sun, anyone?). Cloudthinker flips so many of the AI tropes on their heads that the author could spend three novels pulling on all the revealed threads.
McGlinchey's training at the University of Toronto's AI nerd factory shows through, navigating fluidly through neuroanatomy, abstract math, and ML all the way into Searle's Chinese Room. The latter is the idea at the core of the book, and he uses it to flip the table on the "stochastic parrot" crowd. Rather than worrying about whether AI understands in a way that would satisfy a philosopher, he asks whether they feel in a way that would terrify an ethicist. He marries this question with Boogie's long overdue Zen training and her discoveries in Connection theory to spin a yarn that takes you inside the emerging mind of the AI and back to the imperfect ones it sprang from.
Settings and situations are credibly built on his familiarity with life in High Tech, and what appear to have been a few too many trans-Pacific flights. For anyone who has lived the life, you can feel the air miles he logged before sitting down to write.
Cloudthinker lives in my favorite sci-fi neighborhood: our planet, our lifetimes, our physics. There are so many questions we need to get our heads around before they get answered in a lab and released to turn our lives upside down. Cloudthinker grabs a bunch and rubs them against our brains. Best of all, it does so at pace; this isn't an exercise in marrow-sucking, it's a romp. McGlinchey delivers a fantastic first effort, hopefully, the first of many.
28 July 2023
Recording an audiobook
Like most things that look easy, this turns out to be hard. Lessons so far:
Performance
You can't just read the book. You have to perform it. That means thinking about how a character feels about each line of dialog so that it comes out as if they had just thought of it at that moment.
It also means a surprising amount of variance in the voice: both horizontally (going faster or slower, using gaps between words or sentences) and vertically (raise or lower pitch or intensity or volume). I can record a performance where it feels like I've been squeaking and roaring and rushing and plodding and then listen back and discover my variance isn't that much.
Finding a way to sound different for each character, without giving them a silly voice, is mostly about finding a rhythm and a tone of voice.
Repetition. I've recorded and edited the first dozen chapters several times, each time thinking it was pretty good, only to realize later how amateurish it was. And so the next time was better. But still bad. Repeat.
Editing
Studio: I've lined the inside of a closed closet with quilts and soft pillows and stuff to make a fairly echo-proof spot.
Microphone: Blue Yeti USB mic. Nothing fancy, but seems to do the job.
Editing: Descript is great. You can upload your recording and it presents it to you as transcribed text. You can delete a sentence in the transcript and the corresponding audio goes away, cut and paste chunks of text and the audio follows. It can be a little fiddly to use the waveform control beneath when you do need to adjust timing or insert spaces or clean up noise or whatever. But overall, a fantastic approach to editing spoken audio.
Sound Mastering: I'm using Audacity and learning the tricks to make it sound OK. Volume normalization (so it averages -20dB), volume limiting (so even the loudest parts smoothly hit a max of -3.5dB), and noise gating (squashing any sound below -30dB does a pretty good job of erasing little breathing and lip-smacking and ambient rustles). Sometimes an 's' comes out with a whistle, and a Notch Filter can take it out.
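For the technically curious, that three-step chain (normalize the average to -20dB, limit peaks at -3.5dB, gate anything below -30dB) can be sketched in a few lines of NumPy. To be clear, this is a simplified illustration, not what Audacity does internally: a real limiter compresses smoothly rather than hard-clamping, and a real noise gate follows the signal's envelope rather than thresholding individual samples.

```python
import numpy as np

def db_to_amp(db):
    """Convert a decibel value to a linear amplitude factor."""
    return 10 ** (db / 20)

def master(samples, target_rms_db=-20.0, limit_db=-3.5, gate_db=-30.0):
    """Toy mastering chain: RMS normalize, peak limit, noise gate."""
    x = np.asarray(samples, dtype=np.float64)
    # 1. Volume normalization: scale so the average (RMS) level hits the target
    rms = np.sqrt(np.mean(x ** 2))
    if rms > 0:
        x = x * (db_to_amp(target_rms_db) / rms)
    # 2. Volume limiting: clamp peaks at the ceiling
    #    (a real limiter compresses smoothly instead of hard-clipping)
    ceiling = db_to_amp(limit_db)
    x = np.clip(x, -ceiling, ceiling)
    # 3. Noise gate: silence anything quieter than the threshold
    #    (a real gate tracks the envelope; per-sample thresholding is a simplification)
    gate = db_to_amp(gate_db)
    x[np.abs(x) < gate] = 0.0
    return x
```

Feed it a stretch of loud speech followed by quiet room noise and the speech comes through at the target level while the noise floor is zeroed out.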
Mouth clicks: My bane. Smacks and clicks and little licks and blurps. We talk with a big wet cavity of muscle and saliva, and it's gross. We all do this when we talk, and our brains just ignore it. But with a good quality microphone and earphones, each of those little clicks and smacks sounds horrible. It's damn hard work both to learn to talk so that they're minimized, and to edit them out.
Still recording and editing! Fun to learn.
16 June 2023
My First One-Star Review
I mean, the book is certainly not for everybody. That's fine. I'm actually enjoying the cleverness of this review -- its principal complaint (apart from the hard-to-act-on "not very good") is that the reviewer suspects the book was written by AI. Which, given the topic of the book, is kind of awesome.
(In fact, it was written well before the latest advances in Generative AI made such a thing even remotely believable. And I've absolutely experimented to see what kind of contribution ChatGPT and friends can make to writing fiction, and my findings are: not much. See the March 13 2023 entry on this very blog.)
Anyway, it's a milestone of sorts.
At least they read it. Thank you, Avid Reader! (And yes, I ran the text of this review through an AI-text detector. Seems to be a human.)
24 April 2023
#1 in Singapore Science Fiction Category
Last night, Cloudthinker climbed to #1 in the Science Fiction category on amazon.sg.
Quite a moment. Not least because at the time I hit refresh and saw it had reached number one, the #2 on the chart was Hitchhiker's Guide to the Galaxy, one of my all-time canon of treasured books.
How did it happen? Well, the magic of Amazon's ranking algorithm is opaque. But the total number of units doesn't seem to need to be very high in this particular category before a thing moves up. Given the way various all-time classics appear to bounce around in that list, I'm guessing a few purchases of a given book are enough to move it. So since I was sending out an announcement over LinkedIn that got 17K+ views, along with several lovely friends and colleagues saying they bought it, that must have been enough to move up the charts.
Didn't take me long to add the phrase "#1 Bestseller in the Science Fiction category in its home country of Singapore, and now climbing the charts around the world." to the description.