Secure Remote Mains Switch, part two (January 2025)
The Fox Report
Barry Fox’s technology column
How to avoid being fobbed off by an AI bot
Every day, it seems we get
a new Artificial Intelligence system
such as ChatGPT from OpenAI, Copilot from Microsoft, Gemini from Google
or Meta AI from the owner of Facebook,
Instagram, WhatsApp and Messenger.
Each variant is hailed as the best yet,
or the best soon to come.
AI used to be all about the Large Language Models (LLMs), on which tools like
ChatGPT depend. Now we also have Small
Language Models (SLMs), which are, err,
smaller than LLMs.
LLMs are computer machines (in the way
Alan Turing used the word, rather than
mechanical engines) that can understand
and create human language text. They are
‘trained’ by analysing vast amounts of raw
data (text, images, calculations) and then
use what they have learned to handle a
wide range of ‘mental’ tasks.
LLMs understand questions in any
language and concoct answers based on
past information. They can translate to
and from foreign languages, summarise
big documents in a few words and write
essays and reports. They are also increasingly used to replace humans in customer
care roles.
Sometimes the service system makes it
clear that callers are dealing with a chatbot
(they are given ‘friendly’ robot names like
Cora or Clara). However, other times, the
service tries to fool callers into thinking
that they are talking to a human.
SLMs are smaller because they are more
specialised and trained to do specific jobs.
They can, for instance, focus on healthcare
or specialist technical help. For example,
telling a patient what pills to take, or advising an engineer on what screw to turn
and how far to tighten it. That can be much
easier and quicker than poring over
complex manuals, trying to decipher the
prose and figure out what to do in this
particular situation.
Nvidia (which became huge on the back
of making graphics chips for gaming) is
getting richer by the minute. That’s
because it invested in designing the chips
that all forms of AI depend on. The new
‘Blackwell’ chips are expected to cost
around US$40k (~£30,000) each.
Practical Electronics | January 2025
Meanwhile, more than 13,000 household-name authors, musicians, and actors have
signed a joint statement on AI training
which declares, “The unlicensed use of
creative works for training generative AI
is a major, unjust threat to the livelihoods
of the people behind those works, and
must not be permitted” (https://pemag.au/link/ac27).
The gist of the complaint is that AI
model-building needs raw data to train the
system, which is obtained by ‘scraping’ it
from the Internet, and no-one is yet paying
a fair price for that data. Bodies behind
the complaint, such as Fairly Trained, the
non-profit group set up by UK composer Ed
Newton-Rex, want world-wide agreement
on a way to get payment for the scraped
data. They also want to block the scraping
of data unless payments are made.
That will not be easy, and it will be
harder still to control the future use
of data that has already been scraped
and used to train the AI systems
now on offer.
Still, surely it is possible to implement
such a system. Google is already offering a
watermarking system that lets exam boards
and business employers check for text that
has been generated by Gemini AI.
For what it’s worth, I have now temporarily given up on trying every new AI system
that clever friends tell me is definitely now
the best. I don’t yet trust any of them not
to try to be helpful (or save machine face)
by making up what they don’t really know.
For now, I just stick with basic Google
searching, which is essentially using a
sorted index of the Internet.
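For readers curious what a ‘sorted index’ looks like under the bonnet, here is a minimal Python sketch of an inverted index – the basic data structure behind keyword search. The toy documents and keywords are invented purely for illustration; a real search engine adds ranking, stemming and much more.

```python
from collections import defaultdict

# Toy corpus: document id -> text (invented examples).
docs = {
    1: "secure remote mains switch project",
    2: "chatbot customer care helpline",
    3: "remote chatbot switch review",
}

# Build the inverted index: word -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def search(*keywords):
    """Return ids of documents containing ALL the given keywords."""
    hits = [index.get(k, set()) for k in keywords]
    return sorted(set.intersection(*hits)) if hits else []

print(search("remote", "switch"))  # -> [1, 3]
print(search("chatbot"))           # -> [2, 3]
```

The point of the structure is that a query never re-reads the documents: it just intersects a few pre-sorted lists, which is why keyword search scales to an Internet-sized corpus.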
I give Google a few carefully chosen keywords and ignore all the obvious paid-for
results and clickbait. I then sift through
the mixed bag of what’s left, judging its
relevance to me from personal experience and running separate searches
for cross-checking. This takes longer than
asking an AI system a complicated question,
but I don’t want any ‘thing’ making my
relevance judgements. Next year, perhaps.
Likewise, I try to avoid company chatbot
helplines because they are usually a pain.
But avoidance gets harder all the time. So
far, it is typically still possible to deal with
such bots by carefully drafting a detailed
question off-line and then copy-and-pasting
it into the query box. Sometimes, that alone
may produce a useful reply.
More often, it’s necessary to go on pasting
the same pre-prepared question in every
box space offered until the bot gives up and
passes the query to a human. We then only
have to hope that the human can help (if
they’re only allowed to work from scripts,
they’re about as much use as a bot…).
This AI-avoidance tactic is the modern
equivalent of repeatedly keying star or
hash codes into an automated phone line
until an operator jumps in. It is obviously
time-wasting. Here’s a challenge for readers:
how best to draft an online query or email
that an AI help bot immediately knows it
can’t handle and needs to refer to a human?
Here’s my early attempt at bypassing
AI – successfully tested recently during
correspondence with a supermarket about
rotten food. Attach or embed an image
that contains a key fact of the matter. For
instance, refer in the text body to a scanned
image of a receipt. The worse the scan
quality the better, as long as it is legible
to the human eye.
Then refer in the text body to key information contained only in the receipt, such
as amount paid, product description and
date of payment – without including that
information as text.
Of course, it is possible to extract text
from an image, but optical character recognition (OCR) needs to be initiated, and
the result interpreted and correlated with
the main body text. OCR can fail with
low-quality images (one of the facts those
incredibly annoying “Captchas” rely on).
For the time being, at least, it will be
easier just to get a human to read the text
and look at the image.
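To see why that correlation step is non-trivial for a bot, here is a rough Python sketch of what a company’s automation would have to do once OCR has (imperfectly) extracted text from a receipt image. The shop name, amounts and OCR errors are all invented; real OCR noise is far less predictable than the crude error model guessed at here.

```python
# Imaginary OCR output from a poor-quality scan, with typical
# character confusions: '0' for 'O', 'i' for '1', and so on.
ocr_text = "FRESHMART ST0RE  total paid 12.i9  14 N0V 2024"

# The fact stated (only) in the letter body, to be checked
# against the receipt.
claimed_amount = "12.19"

def normalise(text):
    """Crudely undo common OCR digit confusions (a guess at the
    error model -- real OCR noise is far messier)."""
    return text.lower().replace("o", "0").replace("i", "1").replace("l", "1")

# A naive match fails: the raw OCR text reads '12.i9', not '12.19'.
print(claimed_amount in ocr_text)             # -> False
# Only after guessing the error model does the match succeed.
print(claimed_amount in normalise(ocr_text))  # -> True
```

Even this toy version only works because the error model happened to be guessed correctly; with a genuinely bad scan, a human eye remains the cheaper instrument.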
This is another good reason for sometimes
putting complaints to a company in a snail
mail letter (now that faxing is no longer
the easier option). Enclose a hard-copy of
a receipt with the letter and refer to it in
the text. To automate a reply, the company
will have to scan and OCR both the letter
and receipt, and correlate the two. It will
very likely just be easier for a real, live
human to read and reply.
PE