The Cyber Solicitor

AI Governance

AI procurement from first principles

Being careful about the systems you use

Mahdi Assan
Nov 28, 2025
Image: Jamillah Knowles & Digit / https://betterimagesofai.org / CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)

With the AI boom and its attendant hype come many AI systems. Some are built for general-purpose use - household names like ChatGPT and Claude may come to mind. Others are designed for more specific use cases, like RobinAI (for contract review) or Cursor AI (for writing code).

The plethora of systems available provides plenty of opportunities for organisations to innovate. AI can help create new products or services, or improve the processes used to build existing ones.

That opportunity does not disappear just because organisations must rely on systems built by others. It may sometimes be preferable to develop your own system, built specifically for your use case and trained on your own data, which gives you the widest scope for customisation. But the systems built by the likes of OpenAI and Anthropic still offer plenty of flexibility: their general-purpose models can be built on top of, fine-tuned, or engineered with context and other tools to construct systems for a range of domains.

But with this opportunity comes risk, and these risks are not limited to those that are legal in nature. AI development is an empirical science: assessing a model's behaviour and performance can only really be done by using it and monitoring it post-deployment. Models are often characterised as black boxes, possessing a level of complexity that renders them opaque and difficult to control. As a result, AI systems risk failing to meet important business and legal requirements, with potentially significant consequences: poor ROI, user complaints, and even regulatory intervention and legal action.

And so with these risks comes responsibility. As Ethan Mollick notes in his book Co-Intelligence: Living and Working with AI, as AI becomes increasingly capable, “we’ll need to grapple with the awe and excitement of living with increasingly powerful alien co-intelligence.”1

For organisations procuring AI systems, this responsibility comes in the form of knowing what you buy. In the world of AI, the principle of caveat emptor (‘buyer beware’) is crucial.
