Silicon Chip - Techno Talk: One step closer to a dystopian abyss? - May 2024
Contents:
  1. Publisher's Letter: Welcome to May!
  2. Feature: Techno Talk - One step closer to a dystopian abyss? by Max the Magnificent
  3. Feature: Net Work by Alan Winstanley
  4. Feature: The Fox Report by Barry Fox
  5. Project: GPS-Disciplined Oscillator by Alan Cashin
  6. Project: Dual RF Amplifier for Signal Generators by Charles Kosina
  7. Feature: UVM-30A Module Ultraviolet Light Sensor by Jim Rowe
  8. Project: Songbird by Andrew Woodfield
  9. Feature: Teach-In 2024 by Mike Tooley
  10. Feature: Max’s Cool Beans by Max the Magnificent
  11. Feature: Audio Out by Jake Rothman
  12. Feature: Circuit Surgery by Ian Bell
  13. PartShop
  14. Market Centre
  15. Back Issues: Peak Test Instruments

One step closer to a dystopian abyss?
Techno Talk
Max the Magnificent

As always, we live in exciting times. Indeed, times are getting more exciting by the minute. Can you imagine being able to simply look at something, ask a question, and receive a spoken answer from an AI? I just saw such a system in action!

In my previous column (PE, April 2024), we talked about the concept of mixed reality (MR), which encompasses augmented reality (AR), diminished reality (DR), virtual reality (VR) and augmented virtuality (AV). Mixed reality is exciting, but the real game-changer will come when we combine it with artificial intelligence (AI), all boosted by the awesome data bandwidths promised by mmWave 5G and 6G cellular communications. The question is whether this will be a game-changer for good… or the other sort.

Where are we?

A large language model (LLM) is an AI model notable for its ability to achieve general-purpose language understanding and generation. The first LLM to impinge on the general public’s collective consciousness was ChatGPT. Created by OpenAI, ChatGPT began to roam wild and free in November 2022, only around 18 months ago as I pen these words.

This form of generative AI (GenAI) is now all around us. There are AI-based writing tools (give them a few text prompts and they will write your marketing slogans, product descriptions, brochures and so on); AI-based presentation-generation tools (give them a few text prompts and they will generate your PowerPoint presentation for you); AI-based speech-to-text transcribers (give them an audio or video file and they will return the written transcript); AI-based content summarisers (give them an audio or video file – or the output from a transcriber – and they will return a summary along with a list of action items); and text-to-image generators (I’m currently having a lot of fun with Stable Diffusion). Most recently, a company called DeepMotion announced a text-to-3D-animation tool called SayMotion.
In my case, I’m particularly interested in how AI might help with hardware design and software development. As a case in point, shortly before I started to write this column, I whipped up a tiny test program to run on an Arduino Uno. It comprised only 34 lines of code, 20 of which were items like { and }. Of the 14 lines containing meatier statements, 11 (that’s close to 80%) had bugs, and this was one of my better days!

Prior to the introduction of LLM-based assistants called copilots, embedded software developers typically spent 20% of their time thinking about the code they were poised to write, 30% of their time writing the code they’d just thought about, and 50% of their time debugging the code they’d just written. By comparison, 60% of today’s embedded code is automatically generated by GitHub Copilot. This would offer a tremendous performance boost were it not for the fact that – since Copilot was trained on open-source code – 40% of the code it generates has bugs or security vulnerabilities. Fortunately, we have Metabob from Metabob (don’t ask), a form of copilot that identifies and addresses problems introduced by humans and other AIs.

Where are we heading?

There’s a famous quote: ‘It is difficult to make predictions, especially about the future.’ This quote is so famous that no one knows who said it. It has been attributed to all sorts of people, from Mark Twain to Niels Bohr to Yogi Berra. Whoever did say it knew what they were talking about. I would never have predicted many of the technologies we enjoy today. Contrariwise, some of the technologies I was looking forward to seeing have failed to materialise (in more ways than one).

One of the questions I often ask technologists when I’m interviewing them is, ‘Will we have technology XYZ next year?’ (where XYZ is whatever futuristic technology forms the topic of our conversation).
Of course, they always answer ‘No.’ My next question is, ‘Will we have this technology in 100 years’ time?’ To this, they always respond ‘Yes.’ Then I say: ‘So, now we have the endpoints, all we need to do is narrow things down a little.’

I am confident it won’t be long before we are all sporting some form of MR ‘something or other’. Personally, I think one of the MR interfaces intended for daily use that arrives on the scene sooner rather than later will look a bit like a pair of ski goggles, but I’m prepared to be surprised by something else.

When people tell me that they would have little use for an AI+MR solution, I think back to the early 2000s, when the same folks told me they had no use for phones that could take pictures (‘All I want to do with my phone is make calls’). All I can say is, ‘Look at you now!’

One of the examples I often present is being able to ask my AI+MR combo a question like, ‘What was that book I was reading a few months ago that talked about AI and Ada Lovelace?’ I can envisage the AI responding with the name of the book, while the MR highlights its location on my bookshelf.

Several of the required ‘building blocks’ are starting to fall into place. For example, a company called Prophesee makes a teeny-tiny event-based vision sensor that is only 3mm x 4mm in size and consumes only 3mW of power. Another company called Zinn Labs has mounted these sensors in a pair of glasses frames that are also equipped with an eight-megapixel forward-looking camera, microphones, loudspeakers, and a cellular connection to a cloud-based AI. The camera captures the scene while the sensors track what your eyes are looking at.
I’ve seen a demo where the user looks at something like a plant and simply asks a question like, ‘Can I grow this plant indoors?’ The AI immediately responds, ‘Yes, but it requires bright light, so you’ll need to place it near a south-facing window.’

Horns and swords

I feel like we are sitting on the horns of a dilemma with a Sword of Damocles hanging over our heads (I never metaphor I didn’t like). I hope we’re heading toward an age of wonder; I fear we’re one step closer to a dystopian abyss. Pass me my dried frog pills.

Practical Electronics | May 2024