Pattern Recognition
Are we about to see the YouTube-ification of software?
There’s still a great deal of debate and skepticism about where LLMs are taking us. This is natural, and not entirely dissimilar to the reception the web got when it went mainstream thirty years ago, or cloud apps around twenty years ago.
It is also worth reflecting on how long it took those shifts to progress from niche nerd topics to broader awareness and then adoption. Technological inertia is powerful; it takes about a decade after the intellectual arguments are won before we see majority adoption. Even if it were perfect, fully baked, and 100% ready to go today, it would still take several years for AI to be comprehensively adopted and digested by more than a minority of businesses.
It would be a mistake, therefore, to disparage the current state of language-model-based AI, as I occasionally hear people do, on the basis that it doesn’t deliver sustainable economic returns to those who try to deploy it.
I personally suspect we’re about 1% of the way into wherever LLMs are going, and it’s far too early to size it; sizing it prematurely was the mistake Thomas Watson, the long-time head of IBM, famously made when he reportedly said he thought there was “…a world market for maybe five computers”.
However, my time in software spans more than three decades, which means I’m pathologically incapable of moderating my impulse to map prior shifts onto the present one and attempt, proverbially, to peer around the corner.
While generalised agentic AI business applications are still percolating, and AI investors huff and puff themselves into a bubble — what if it’s not a bubble? — much of what I’m reading and hearing points to AI-assisted coding having turned a corner in the second half of 2025. Despite AI agents making headlines for mostly the wrong reasons, a quiet revolution is happening in the IDE, where sentiment among developers is shifting rapidly:
“I’ve never felt this much behind as a programmer... I have a sense that I could be 10X more powerful if I just properly string together what has become available over the last ~year and a failure to claim the boost feels decidedly like skill issue.” — Andrej Karpathy
“The quality of work produced by Claude... regularly blows my mind. I cannot and do not want to go back to that infuriating, head-scratching, hair-pulling, sloth-like programmer productivity of, er, 2 years ago.” — Olly Heady
“I believe the AI-refusers regrettably have a lot invested in the status quo... They all tell themselves that the AI has yet to prove that it’s better than they are at performing X, Y, or Z, and therefore, it’s not ready yet. But from where I’m sitting, they’re the ones who aren’t ready.” — Steve Yegge
The implications are profound. Collapsing production costs, combined with a simultaneous leap in developer productivity, will spark a number of significant changes, not least a likely explosion in the number of new products coming to market.
You’re gonna need a bigger metaphor
The Cambrian Explosion was occasionally employed as a somewhat grand metaphor by VCs to describe what happened to software around 2010, when the practice of building desktop software was finally rendered obsolete by the general availability of reliable, low-cost cloud services. Building software for the cloud fundamentally altered and flattened the prevailing costs of production, distribution, and operations for SaaS businesses, quickly spawning thousands of new SaaS startups.
As great as the shift to the cloud was, it’s possible, if not probable, that AI-assisted coding will bring about a much bigger shift.
Anish Acharya, a General Partner at Andreessen Horowitz, makes a compelling case in his article “The Future of the Web is the History of YouTube,” arguing that the structure of the software industry could be about to undergo a huge metamorphosis, akin to the impact YouTube has had on video content creation and distribution.
In barely 20 years, YouTube has succeeded in all but eliminating the cost of producing and distributing video content (and monetising it), resulting in a colossal number of mainstream and niche-interest channels and content producers, with a staggering 700,000 hours of new content being uploaded every day.
“We may see hyper-personalized applications on the web, for much smaller audiences. This is tremendously liberating: software no longer needs to be practical... It just needs to have a good idea behind it, and a couple of people who understand its value.” — Anish Acharya
Working backwards, YouTube’s impact on the media industry was more or less predicted in 2008 in Clay Shirky’s book Here Comes Everybody: The Power of Organizing Without Organizations, which described three recurring patterns:
Mass Amateurisation: The monopolies of traditional organisations give way to a flood of participants creating content.
Publish Then Filter: Because publishing is cheap, the traditional “gatekeeper” model inverts. Information is published first, and filtered for quality later.
The Power Law: Content volume explodes, but follows an 80/20 distribution where a minority of participants still drive the majority of consumption.
If AI-assisted coding enables software production to jump the tracks onto the same economic pathway followed by video production on YouTube, then we’d better hold on to our hats for the next era of software development.
Mass amateurisation, the democratisation of expertise or unadulterated chaos?
One snag, though, with mass amateurisation is that, unlike business software, niche-interest video content doesn’t need to be mission-critical. It might dampen your enjoyment of an evening’s casual entertainment at home if your favourite cat YouTuber skips a week because they’re on vacation, but if we’re soon to be awash with thousands of kitchen-table software vendors, where do factors like brand, trust and reliability feature when it comes to hitching your critical business processes to their wagons?
Which brings us back to pattern recognition. Just as skepticism was abundant during the early days of the web and the cloud, overly fixating on the risks of AI-generated code misses the forest for the trees. The collapse of production costs is not a variable; it is a trend line.
Yes, we will be awash in niche, “good enough” software products, many of which will go nowhere or fail. Yes, relying on unproven, micro-scale vendors carries risk. But to focus solely on the potential for chaos is to make the same mistake Thomas Watson made with the computer. We are witnessing the democratisation of engineering, where the barrier to entry drops from “millions of dollars” to “a good idea.”
As I said, I believe we may well be only 1% of the way into all this. If the YouTube era taught us anything, it’s that while mass amateurisation brings chaos and noise, it also unleashes creativity on a scale previously impossible. The next ten years won’t just be about who can write code, but who can navigate a world where coding is no longer the bottleneck—and who can filter the signal from the noise.



Oh my lord, it's like when everyone was a DJ, or a producer. And then everyone was a writer... just because you can, it doesn't mean you should. Love this article, Gary.
Love this article. Given the above, what are your thoughts on how to compete against increasingly fragmented competition in a vertical?
You’ve talked about brand. What’s your thinking around service, good taste in design and UX, community, distribution, and other factors that can’t easily be mimicked (all of which arguably contribute to a brand’s promise to users)? What are the moats in this new normal?