The annual cadence of Apple’s money-printing press conferences is a big date on the tech journalism calendar. It might not be exciting any more (as we discussed last year), with a steady stream of leaks removing the chance of big surprises and an increasingly incremental approach to product design ensuring that each year’s release is mostly the same as the previous year’s. But it’s still a big moment for readers, reporters and the industry.
For me, it’s also a personal milestone. I joined the Guardian when the iPhone 5S was announced, and I’ve covered technology here for ten years since then.
The iPhones have changed over that time, obviously. From the slender iPhone 5S, which introduced Touch ID to the line-up, through the death, rebirth and death again of “small” phones, to the introduction of the iPhone X and the £1,000 smartphone, all the way to the present, with the iPhone 15 Pro’s titanium body, hardware-accelerated ray tracing, and built-in espresso machine.
But so too has much else. The job of a technology reporter is meaningfully different from when I started, just as the sector I cover is.
There’s already an app for that
Ten years ago marked the dying days of the app boom. In 2009, Apple was advertising the iPhone 3G with the tagline, “there’s an app for that”, seizing on the App Store – launched the previous July – as the unique selling point for the platform as a whole. But the real app boom took a few more years to arrive, as smartphone penetration took mobile app development from a fun hobby to a system for printing your own lottery tickets.
With millions of iPhones sold, and a mobile web experience that was still sub-par, it was perfectly possible to slap together a 79p app, sell it to a couple of million people, and make enough money to retire. That didn’t happen that often, perhaps, but it was frequent enough to shape people’s perception of the business.
And apps weren’t just software. You could take a business model that was boring and stale, slap it around an app, and become a tech startup. This was the era of Uber (taxis … with an app), Deliveroo (takeaways … with an app) and Taskrabbit (tradespeople … with an app).
A significant chunk of the job ten years ago was keeping track of the dizzying array of new app launches, spotting interesting ones, and homing in on their stories. We even had an entire blog dedicated to it.
That low-hanging fruit has been plucked. I’ll bet you can spot the difference in your own life: once you discount games and new apps from big companies, when was the last time you actually installed a new app?
The smartphone era changed the world, and much of the last decade has been dominated by companies desperately trying to work out what comes next. A backward-looking view of history suggested another upheaval was on the horizon: the steady tick-tock of computing, from mini to micro to personal computers, to GUIs and the web and then smartphones, suggested that another innovation would shortly reshape the competitive landscape again.
Virtual reality, augmented reality, extended reality; cryptocurrencies, initial coin offerings, blockchain, NFTs and Web3; even 3D printing and self-driving cars were presented as the next ubiquitous tech just hovering on the horizon.
Instead, it seems more likely than ever before that the smartphone era isn’t a phase in computing but the apotheosis of it. Even if Apple’s Vision Pro does finally let virtual reality escape its niche, it seems unlikely that it will do so by usurping the smartphone’s role.
As a journalist, that means the last decade has forced the development of a calculated cynicism. When I started as a technology reporter, excited optimism was a crucial skill: being able to look at early versions of groundbreaking technology and understand its potential was what stopped good reporters from writing off things like the first iPhone (overpriced, no 3G, tied to a single network).
But as promise after promise failed to materialise from across the sector, clinging to that optimism started to be foolish – and, worse, to serve readers poorly. An entire industry exists to explain why half-baked proofs of concept are worth getting excited about; far harder is to spot the elements that may never improve, the flaws and weaknesses that investors want to distract from, and the pitfalls inherent in rolling unfinished technology out to audiences of millions or billions overnight.
A new horizon
That experience is also why I’m confident that the next decade is going to be different. Large language models, and the broader AI boom however we demarcate it, are, even for cynical me, exciting. The explosion of interest in ChatGPT within days of it hitting the internet means that, whatever other cynicism one might hold about the sector, AI isn’t buoyed up by artificial hype. The excitement around the technology, and the use of it, are genuine, and that alone should make it stand out from the crowd.
I’ve started to describe myself as an optimistic pessimist when it comes to AI technology. I don’t think it will achieve a fraction of the promises that we are making for it, and I think that’s good. The world in a decade’s time will, I hope, look much like the world of today, but with more difficult problems solved, and more drudgery eliminated.
The optimism is there because the technology, today, is already capable of huge things. I’ve used it to generate new recipes, to write letters of complaint and to brainstorm holiday activities. There’s no great need to believe promises of future improvement to see how companies and organisations can learn to use this power.
The pessimism is because I’m still a cynic about those unfounded promises. Yes, there have been great improvements in what AI can do, and there are likely to be more in the future. But I’ve been told that progress is inevitable too many times to believe it.
If you want to read the complete version of the newsletter please subscribe to receive TechScape in your inbox every Tuesday