AI, Governance, and the Human Consequences of Design
Consulting & Advisory
I work with organisations at the intersection of psychology, technology, and public life, helping them think through how AI systems shape human experience once they become part of everyday use. My interest is less in AI as a technical artefact, and more in AI as social and psychological infrastructure — systems that increasingly mediate attention, judgement, emotion, identity, and care.
Much current work on “ethical” or “responsible” AI focuses on principles such as fairness or transparency. While these matter, organisations often struggle to translate them into decisions about real products, real users, and real risks. My work bridges this gap. I start from lived human experience — asking what it feels like to use these systems — and translate psychological impact into practical insight for design, governance, and organisational strategy.
What I do
I help organisations identify and mitigate human and psychological risks in AI products — especially around emotional reliance, mental health impact, and dependency — and translate these risks into concrete design and governance recommendations.
I am often asked to support organisations with:
- Human Risk Assessment for AI Products: Evaluation of use-cases, prompts, UX language, and deployment context to identify psychological harm, dependency, and misuse risks, with practical mitigation recommendations.
- Pre-Launch Human Safety Review: Rapid review of high-risk features or applications before release, producing a go/no-go memo and required changes.
- Advisory & Strategy: Ongoing advisory support for product, policy, and leadership teams on psychologically informed AI governance.
- Additional areas of support:
  - identifying ethical risk emerging from use-cases, defaults, and incentives
  - developing psychologically informed governance or internal guidance
Engagements are strategic and advisory in nature. I work with product, design, policy, and leadership teams to support clear thinking about responsibility, risk, and human consequence as AI systems enter everyday life. The emphasis is on foresight, coherence, and accountability rather than box-ticking or reputational management.
I work with technology companies, public bodies, think tanks and policy groups, universities, research institutes, and cultural organisations. Engagements may be project-based or ongoing. Enquiries regarding consulting, advisory, or commissioned work are welcome.
Non-Fiction
The Algorithmic Mind
The Algorithmic Mind (forthcoming) examines why rising anxiety, burnout, and social fragmentation are not personal failings, but designed outcomes of contemporary systems. Drawing on psychology, neuroscience, political economy, and computer science, it explores how algorithmic technologies govern attention, identity, and behaviour — and why individual fixes such as productivity tools or therapy alone cannot address structural harm. The book asks what forms of care, public life, and meaning remain possible in an algorithmic age.
Fiction
Borderless
Borderless (2026) is a literary novel about Charlotte, a British woman adrift in contemporary India, determined to live without borders — of nation, class, race, or desire. Moving through Delhi, Bangalore, Goa, and the Himalayan foothills, she becomes entangled in relationships that expose the uneasy intersections of intimacy, power, and privilege. As she confronts the limits of mobility and freedom, the novel asks what belonging means in a globalised world — and whether borderlessness liberates, or slowly exiles.
Keeping it authentic, fearless, and compassionate
Get involved by working with me directly. Alongside receiving one-to-one support, you’ll be contributing to the development of NeuroClear — an emerging, evidence-based clinical platform focused on assessment and support for neurodivergent adults.