Cognitive Confidence in the Age of AI
Why Thinking Well Might Be Our Most Radical Innovation
We often think of innovation as something shiny and new. But what if the most radical innovation is how we think? In this piece, I explore why AI literacy in the built environment isn’t just about learning what tools exist; it’s about building the cognitive confidence to use them well.
AI is already here. From the maps we follow to the music we stream, algorithms shape more than just our habits. They shape our sense of relevance, urgency, and value. We trust them quietly: to guide our steps, to surface the right email, to summarise a noisy meeting. But in professional spaces, especially in the built environment, that trust is far more tentative.
AI still feels like a specialist domain. A future add-on. A resource for someone else. Not something to be built into the daily fabric of design, delivery, and decision-making. Yet the tools are already here, embedded in common workflows. Common data environments (CDEs) now flag model changes automatically. Classification tools clean and tag file metadata. Automated clash detection helps design teams focus, not just find errors.
The issue isn’t availability. It’s comprehension, comfort, and above all, critical thinking.
Critical thinking is cognitive infrastructure. It underpins our ability to reason, to govern, to ask better questions of the systems we use. In a time of automation, thinking becomes more important, not less. We cannot afford to treat AI as a replacement for human judgement. It must be shaped by it, guided by it, and held accountable to it.
Rather than offering generic AI training, we need to design literacy for specific roles and tasks. A few starting points:
Project Managers: Use predictive tools to spot programme slippage based on patterns across historic projects.
Document Controllers: Leverage version tracking in CDEs to catch subtle changes in drawings and models.
Design Managers: Filter and group clashes automatically so teams can focus on resolving, not reacting.
BIM Coordinators: Use validation tools to check model completeness and catch inconsistencies early.
Facilities Leads: Apply predictive maintenance models that flag issues before they escalate.
Strategy Teams: Run simulations using generative AI to test planning scenarios or stakeholder impact.
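To make one of the ideas above concrete, here is a minimal sketch of the kind of clash triage a design manager might automate: grouping detected clashes by location and discipline pair, then ordering the groups by how many high-severity clashes they contain. All field names and data here are hypothetical, not drawn from any specific product’s export format.

```python
from collections import defaultdict

# Hypothetical clash records, as might be exported from a clash detection tool.
# Field names are illustrative only.
clashes = [
    {"id": 1, "level": "L02", "disciplines": ("MEP", "Structure"), "severity": "high"},
    {"id": 2, "level": "L02", "disciplines": ("MEP", "Structure"), "severity": "high"},
    {"id": 3, "level": "L03", "disciplines": ("Arch", "MEP"), "severity": "low"},
]

def group_clashes(records):
    """Group clashes by (level, discipline pair) so teams resolve clusters, not one-offs."""
    groups = defaultdict(list)
    for clash in records:
        key = (clash["level"], clash["disciplines"])
        groups[key].append(clash)
    return dict(groups)

def prioritise(groups):
    """Order groups by their count of high-severity clashes, largest first."""
    return sorted(
        groups.items(),
        key=lambda kv: sum(c["severity"] == "high" for c in kv[1]),
        reverse=True,
    )

grouped = group_clashes(clashes)
ordered = prioritise(grouped)
```

The point is not the code itself but the shift in working pattern it represents: instead of reacting to hundreds of individual clash reports, the team sees a short, ranked list of clusters and decides where judgement is needed first.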
These aren’t future dreams. Microsoft Copilot is already reshaping knowledge work. ChatGPT is quietly embedded in admin-heavy tasks. Google’s AI literacy tools are freely available to anyone.
But tools alone do not create transformation. People do. And adoption requires more than digital access; it requires cognitive inclusion.
Many professionals are willing to try but don’t feel safe doing so. Trust is fragile, especially for those already marginalised by systems that rarely speak their language or design with their minds in mind. Neurodivergent professionals, for instance, may find that AI tools reduce decision fatigue, but only when introduced with care, not surveillance.
So how do we build trust and confidence in a way that feels real?
Upskill from within: Train team champions who understand both the tool and the task.
Start small and safe: Opt-in pilots lower the barrier to entry.
Design for daily friction: Target the problems people hit every day, not future fantasies.
Ask vendors for help: Many will co-create onboarding with you.
Acknowledge hesitation: That’s not a barrier. It’s where critical thinking begins.
AI will not replace us. But it will reflect us. And the quality of our questions will shape the quality of its impact.
The real innovation is not in what these tools can do. It’s in what we choose to do with them, how we shape them, and how we teach others to think alongside them.
So yes, teach the tools. But more than that, teach the thinking that keeps them honest.
—
References and Resources:
Google AI Literacy Program: https://grow.google/intl/uk/certificates/ai-essentials/
Microsoft Copilot: https://copilot.microsoft.com/
OpenAI ChatGPT: https://chat.openai.com/
AI Readiness Report – Nesta, 2023
Neurodiversity and the Future of Work – CIPD, 2024
Digital Twins and AI Ethics – BSI Whitepaper, 2023