… and more from this edition of the Weekly.
Counting and numbers as concepts strike me as different from words and language. This became obvious to me when I saw my toddler learning how to speak and count. We may assign words and language to numbers, but the abstract ideas of counting and arithmetic do not need language. Math needs symbols more than it needs language or words1. And those symbols need not be words like “one”, “two” or “ten”. Instead, you can string together a series of 5 dots (.....) and use it to represent a count of five, or assign each dot a value of ten and have those five dots represent a count of fifty. That is not how human communication works, but it is a fairly effective way towards mechanizing mathematics.
Indeed, computers, calculators, and programming languages exploit that basic idea of using symbols for representation, and it makes them very good at the rudiments of math, such as counting and arithmetic. This is precisely how computers are able to use sequences of 1s and 0s to store, transform, and transmit data.
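To make that concrete, here is a minimal Python sketch of the same idea (my own illustration, not anything from the models discussed below): the count of fifty written purely in symbols, the way a machine stores it, with arithmetic happening on those symbols directly.

```python
# A count is just a pattern of symbols. Python's built-in bin() and int()
# show the same count of fifty in familiar decimal symbols and in the
# 1s and 0s a computer actually stores.

count = 50
print(bin(count))        # '0b110010' -- fifty, written with only two symbols
print(int("110010", 2))  # 50 -- and back again

# Arithmetic works on the symbols directly; no words required.
print(bin(count + 5))    # '0b110111' -- fifty-five
```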
So, when large language models burst onto the scene in 2022, something interesting happened: we all expected these computational models of language to be good at math, because they were, well … computational.
But it is not entirely surprising that these language models cannot reliably and correctly count the number of Rs in the word strawberry2, even as they spell the word correctly, or conjure prose and poetry about strawberries. LLMs are so good at language that they can mimic an understanding of numbers. Ask ChatGPT or Copilot to write an essay about Zero, and they will not disappoint. But at their core, they are fundamentally models of language, and expecting them to be reliable at counting is fool’s gold: you will get glimmers of success, but they will eventually fail.
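For contrast, the same question posed to plain old symbol-manipulating code is trivial; a throwaway Python snippet like this one (again, just my own illustration) gets the answer right every single time.

```python
# Counting letters deterministically: no language model, no guessing.
word = "strawberry"
print(word.count("r"))  # 3
```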
This inability to deal with abstract symbols and concepts does not just show up with numbers; it shows up with software engineering as well. LLMs are not known for their ability to understand the moving parts of a complex software system, its code and its architecture blocks. They will do their best at guessing what I need to code as a programmer, but it is mostly hit or miss. A lot of the code that we write as programmers builds on top of existing code: we call into different parts of what is already available to us. When using an AI assistant to write code, however, I have to spend a lot of time figuring out whether the code it generated uses actual bits of what already exists, or whether it hallucinated code components that do not exist.
That is a core limitation of these new-age systems: they work well within their guardrails and environments, but break down spectacularly when used outside of them. Our ability to gain the most from them lies in understanding those limitations, those guardrails and those environments.
Life is for living
Instead of rushing through life, I find myself standing still more than I used to. It has allowed me to notice life around me. And when not intensely private, I capture it with my camera.
Moonshots
Grabbing shots of the moon one late afternoon as it danced behind white clouds on a stage set by a bright blue sky.
Bird of Paradise
Captured this with my iPhone. Discovered new ways to use the iPhone camera app while grabbing this pic.

Fascinating me: Alexa+
The latest product unveiled in this AI-fueled product cycle is Amazon’s Alexa+. For me as a consumer, this is the first AI assistant/offering where I do not have to cough up an extra $20 a month (I am looking at you, Gemini). Instead, it is bundled into the cost of Amazon’s Prime subscription, something I already have.
Those price dynamics made me realize something else: Amazon Prime is a rare recurring subscription for goods in the real world3. The fact that Amazon feels comfortable bundling an AI offering into it, coupled with Prime’s tangible relationship with the real world, speaks to Prime’s enduring value proposition for everyday users4.
It remains to be seen whether it actually works. I will wait patiently as Amazon finishes its rollout to all Prime users (I am not spending extra on an Alexa device to get ahead in the Alexa+ queue).
Footnotes
- There is a fair argument to be made that language is essentially a bag of symbols. But I do think there is more to language than that. ↩︎
- LLM-based tools are getting shockingly better at mathematics, and I am not entirely sure if it is the underlying language model that is getting better, or the tools’ ability to interpret math in language, and orchestrate the mathematics portion of a conversation towards a more traditional calculator. ↩︎
- Most other subscriptions I have are for digital services (see YouTube, M365). ↩︎
- … and also perhaps to Amazon’s profit margins on a Prime subscription? ↩︎
