The AI Diffusion Dilemma: What It Means for the Office of 2035
Will your office still matter in 2035—or will AI make it a ghost town? The race is on to find out.
‘Will AI follow the decades-long, bumpy rollout of electricity, or will AGI automate most knowledge work by 2027? Experts are fiercely debating, and the answer could determine whether the office towers we build today are essential hubs or expensive relics by 2035’
Commercial Real Estate is really a game for futurists, isn't it? Most industries go from input to output in a matter of days, weeks or months, but in real estate, when we commit to a new project, or buy or sell an asset, we are really betting on how the world will look many years, if not decades, hence. We develop all these models explaining, to two decimal places, what our returns will be over the next decade, yet we know we're dressing up educated guesses as scientific fact. All models are wrong, but some are useful, as the saying goes. Even if we mostly keep quiet about this.
Last week, we took the line that in a world where generic real estate is increasingly challenged by technological shifts, a wise strategy would be to invest in the real estate those shifts create—not in the real estate they leave behind.
Few assets sit more squarely in the blast radius of this debate than the office. If AI augments, the office adapts. If AI replaces, the office shrinks. And fast.
In the AI world we are heading into, the problem is that there are credible experts supporting both arguments.
Two Futures of AI: Slow Burn or Flash Fire?
Two major publications have recently come out: one (‘AI as Normal Technology’) by senior, highly respected academics at Princeton University, and one (AI 2027, ai-2027.com) by a team of senior, highly respected AI researchers.
The Princeton View: AI Will Take Decades
Let’s start with the Princeton paper, ‘AI as Normal Technology’. In it, Arvind Narayanan and Sayash Kapoor…
"explain why we think that transformative economic and societal impacts will be slow (on the timescale of decades), making a critical distinction between AI methods, AI applications, and AI adoption, arguing that the three happen at different timescales.”
Whilst they agree that AI is a ‘General Purpose Technology’ and thus of great importance, they justify their timeline of decades because:
"It is guided by lessons from past technological revolutions, such as the slow and uncertain nature of technology adoption and diffusion... With past general-purpose technologies such as electricity, computers, and the internet, the respective feedback loops unfolded over several decades, and we should expect the same to happen with AI as well.”
The key arguments for a long, drawn-out impact scenario for AI are that:
- Innovation is one thing and diffusion quite another. Getting to broad adoption always takes a long time because reality gets in the way, and that flat, clear road ahead turns into a glacial moraine.
- Speed limits are everywhere: safety concerns (especially in high-stakes areas), regulatory hurdles, organisational inertia, the need to redesign workflows, and the fact that Generative AI is probabilistic, not deterministic, so it sometimes just fails, in unexpected ways.
- Humans will get in the way (a point, incidentally, that Tyler Cowen elaborated on recently). They just tend to slow things down. Turkeys seldom vote for Christmas, so we can expect endless workflow engineering, a desire for ‘control’, and the re-engineering of work to keep ‘humans in the loop’, sometimes where they absolutely will be required, but often for more dubious reasons.
Under this scenario, knowledge work will likely be augmented by AI, which means offices will change their exact purpose and internal form factors, but the fundamental need for human workers, gathering, collaborating and managing processes (including, of course, AI itself), will persist and evolve relatively slowly. So mass obsolescence gets kicked a long way down the road.
Current support for this view lies in the old saw that most enterprises are only now rolling out technology that was ‘hot’ 10-15 years ago, whilst startups, long since bored by it, have moved on and are developing the mainstream tech of 10-15 years hence.
So all is OK: the largest commercial real estate asset class is alive, kicking and has a healthy future.
All things
#SpaceasaService
Exploring how AI and technology are reshaping real estate and cities to serve the future of work, rest, and play.

#GenerativeAIforRealEstatePeople — Cohort 10 Starts 12 May.
First live session: Friday 16 May
The leading AI course for ambitious real estate professionals ready to transform how they work, create value, and lead innovation.
Built for:
- Heads of Innovation & Digital Leads driving strategy and transformation
- CRE Executives focused on operational efficiency and tenant experience
- PropTech Founders & Product Leaders integrating LLMs into platforms
- Asset Managers & Investment Analysts seeking smarter underwriting and insight
- Workplace & Space-as-a-Service Operators building AI-augmented journeys
- Legal, Compliance & Risk Officers navigating AI regulation and automation

What you’ll get:
- 20+ real-world case studies of companies deploying GenAI across the built world
- Deep understanding of where AI is reshaping assets, jobs and cities
- Hands-on experience with ChatGPT, Claude, Gemini, Perplexity, Midjourney & more
- 20+ prompt frameworks for real estate-specific workflows
- Guidance on identifying use cases, launching pilots, and scaling adoption
Master AI fluency. Futureproof your role. Drive real innovation.
The AI 2027 View: Cognitive Labour is Going Exponential
Ah, but…
If Princeton sees AI as a slow-burning revolution, the AI 2027 authors see it as a flash fire—one that could consume entire categories of cognitive work in just a few years.
Their perspective, laid out on ai-2027.com, is starkly different. They argue that:
"AGI is defined as AI capable of performing the vast majority of human knowledge work."
"Recent breakthroughs indicate that the timeline for AGI could be significantly shorter than previously anticipated, measured in years, not decades."
"The arrival of AGI by 2027 would represent an unprecedented transformation, automating most cognitive tasks and fundamentally reshaping the economy and society."
"Understanding this timeline is crucial for individuals, organizations, and governments to prepare for the profound changes ahead.”
And yes, it might be different this time:
"Unlike previous general-purpose technologies, AGI's ability to automate cognitive labor itself could lead to recursive self-improvement and an exponential acceleration of progress.”
In a nutshell, the authors predict imminent, revolutionary change driven by rapid capability gains in cognitive automation. This will be powered by what is known as ‘recursive self-improvement’ (RSI), which is essentially where a human is no longer needed to help a computer learn: the system improves by playing against itself. It is why, after Google DeepMind’s AlphaGo programme famously beat Lee Sedol, one of the world’s best Go players, its self-play successor, AlphaGo Zero, went on to beat it 100-0 after only days of training, with no human game data at all.
Once RSI kicks in the speed of improvement moves to an entirely different level.
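To make the self-play idea concrete, here is a deliberately toy sketch in Python. It is not how AlphaGo Zero actually works (which uses deep reinforcement learning and tree search); this stand-in uses a single ‘skill’ number and random mutation plus selection, purely to illustrate the loop: the current champion plays a modified copy of itself, the winner becomes the new champion, and skill ratchets upward with no human teacher involved. All function names and parameters here are illustrative.

```python
import random

def wins_match(skill_a, skill_b, rng, noise=0.1, games=15):
    """Best-of-`games` match with a little luck; True if A wins a majority."""
    a_wins = sum(
        skill_a + rng.gauss(0, noise) > skill_b + rng.gauss(0, noise)
        for _ in range(games)
    )
    return a_wins > games // 2

def self_play_improve(generations=200, seed=42):
    """Toy recursive self-improvement loop.

    Each generation, the champion 'plays itself': a randomly mutated copy
    challenges it, and the match winner becomes the new champion. No human
    data or human opponent is ever involved.
    """
    rng = random.Random(seed)
    champion = 0.0  # a single 'skill' number stands in for model weights
    for _ in range(generations):
        challenger = champion + rng.gauss(0, 0.5)  # mutated self-copy
        if wins_match(challenger, champion, rng):
            champion = challenger  # the self-improvement step
    return champion

print(self_play_improve())  # skill climbs well above its starting value
```

The design point the ‘2027’ authors lean on is visible even in this caricature: the improvement signal comes from the system itself, so nothing outside the loop rate-limits it except compute.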
At this stage, one refers back to the academics’ arguments and thinks: yes, but… all those constraints. Do they disappear?
The Office in the Blast Radius
To which the ‘2027’ authors respond with:
- Safety/Reliability? Sorted, because the AI will debug, test and validate itself far faster and more thoroughly than any human can, and will rapidly overcome these obstacles.
- Integration/Workflow Redesign: Same again, the AI will be able to design optimal, new workflows, and then create compelling user interfaces that drastically reduce the normal friction of adoption. The new systems will simply be so much better, so quickly, that rolling them out will be nowhere near as painful as it has historically been.
- Learning Real-world Nuance: And again, by being such a fast learner, and so easily plugged into all the systems a company has, the AI will be able to absorb the ‘tacit knowledge’ of any organisation very quickly. Achieving quality ‘gut feeling’ will no longer take years or decades of experience.
Overall, this AI will be able to demonstrate deep domain knowledge and high-value utility at a speed no previous technology could match. And whilst the authors explicitly acknowledge the arguments put forward by the Princeton academics, they argue that the key differentiator, this time, is the nature of the technology itself.
For instance they emphasise that we are talking about the automation of cognition. The AI will be automating cognitive labour, including the development of the technology itself. Electricity didn't design better power plants; the early internet didn't autonomously code better network protocols. AI, they argue, can do this.
They also point to the generality of this technology: LLMs (and future AGI) simply have so much more utility straight ‘out of the box’. They aren’t reliant on much else happening before they can be run at full power. And this is software, not hardware, so scaling is nigh-on infinite, marginal costs are negligible, and deployment can take hours, not months or years.
Returning to the fundamental point, once the underlying mechanism (self-improving cognitive automation) is in place, we are genuinely talking about technologies the likes of which we’ve never seen before.
The Planning Dilemma: What to Believe, and When?
So we have two credible, well-articulated arguments reaching diametrically opposed conclusions. In one, offices are pretty much safe for decades to come; in the other, the assumption would have to be that we will need a bare fraction of the global office stock that exists today.
I think it would be quite easy, because the conclusions sound so ‘out there’, to dismiss the ‘AI 2027’ argument, and comfortably luxuriate in the ‘so slow I don’t really need to think too much’ prognosis of the Princeton academics. And I am equally certain many will.
But, for me, that is the high-risk route to take. Yes, they are all, to varying degrees, talking their book, but whenever you read or listen to the senior researchers from the major AI labs (not just the admittedly most adamant ‘2027’ authors), they all paint a picture of a technology developing at crazy speed, with capabilities arising that constantly amaze them. Everyone is assuming extraordinary things by 2030. And no one is talking in decades any more.
The 2030–2035 Danger Zone: A Strategic Red Flag
So the really critical question is whether the ‘normal’ diffusion speed of technology really will be accelerated by the new capabilities delivered via RSI. I think it will, though my accelerator is less ‘to the metal’ than the ‘2027’ authors’. I would say the asset class is safe until 2030. But 2030-2035 is the danger zone.
Even without full-blown AGI, highly capable, specialised AI systems could automate vast swathes of tasks within knowledge jobs, leading to:
- Significant job restructuring even if not mass unemployment.
- Reduced overall headcount needed for certain functions.
- Downward pressure on wages for easily automated cognitive tasks.
- A potential bifurcation – high demand for those who manage/direct AI and perform complex non-routine tasks, lower demand elsewhere.
This feels very reasonable and, I would guess, likely to highly likely.
And that is just up to 2035, a decade hence. Being worried about the state of office demand by then is entirely rational and prudent: I think the inertia elastic band will snap. In a future newsletter we’ll look closely at how to be ready when it does.
Summary: What 2027 vs 2035 Might Look Like

Strategic Takeaways for the Office Asset Class
Which does not mean NO offices will be required. But I’d sure as hell be concerned about owning the ‘right’ offices. What ‘right’ means we have covered before and will cover again.
The bottom line, though, appears to be that offices as an asset class are getting riskier. I do understand the Princeton academics’ arguments, but find it hard to believe the coming decade will be as stodgy as they expect. Most pertinently, I think there will be very many slow-moving enterprises, maybe a majority, but I also believe we are morphing into an age of fast, agile, ultra-productive superteams, and they WILL be utilising all the power of these new tools asap.
OVER TO YOU: Long or Short on the Office?
Offices 2030–2035: Are you betting on augmentation or automation?
On inertia or exponentiality?
Because buildings built today will still be around in 2035. The only question is: will they still be needed?
What are you seeing? Let’s map the risk together.