I changed my mind on AI. And you probably should too.
When ChatGPT first came out and was blowing up, I wasn’t all that impressed. I tried it a few times and thought it was cool, but I did not see the value in talking to a chatbot beyond asking it random, useless questions.
I had come across these GPTs before. They came up during some research I was doing on how security and intelligence agencies were using AI for their operations. A 2020 report by the Royal United Services Institute detailed how AI could be used by such agencies for ‘cognitive automation’, in which AI helps analyse large volumes of data. In that context it mentioned OpenAI’s GPT-2 and how the model could be used for language analytics, including analysing transcriptions of captured audio. But not much else was written on the matter, so I never looked into these language models any further at the time.
And then a short time later in 2022 when a wave of generative products came crashing in, including DALL-E 2, Midjourney and of course ChatGPT, I was wholly sceptical. Suddenly there was lots of talk about artificial general intelligence, AI safety, the prospect of AI taking all of our jobs and so on. But my understanding and interest remained fairly limited, and so I interpreted much of this as hype that should not be taken all that seriously.
And this was the attitude I had for a long time: barely using the tools whilst I stood to the side and criticised. I would read about AI but hardly use it myself.
It is the classic move for most people working in tech law and policy: dealing with the object to be governed at arm's length and with only a minimal understanding of its mechanics. It is like the difference between static and dynamic analysis of an application. Legal professionals tend to be stuck constantly doing the former and are therefore largely ignorant of how these AI systems work in real-world environments.
And there is little motivation to see it any other way. Legal experts are relied on for their risk aversion and their ability to see the worst-case scenarios so that these can be mitigated or avoided. Seeing AI this way is not particularly inspiring and certainly does not encourage one to actually test these systems out for oneself.
Eventually, though, my passivity passed and I decided that instead of sitting on the sidelines, I was going to get stuck in.
This is when I started experimenting with AI for tasks that I wanted to either offload or make much easier to do.
These experiments included building an AI workflow for GDPR vendor reviews, a rather tedious and time-consuming task. I worked out the appropriate instructions and harnessing, with safeguards to reduce hallucinations whilst also making verification straightforward.
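To give a flavour of what that harnessing looks like, here is a minimal sketch rather than the actual workflow. It assumes the OpenAI Python client, and the model name and checklist questions are purely illustrative. The safeguard is deliberately simple: the model must quote the passage it relied on for each answer, and the harness checks that the quote actually appears in the document, so anything unverified goes straight to a human.

```python
# A minimal sketch, not the actual workflow: the model must quote the exact
# passage it relied on for each answer, and the harness then checks that each
# quote really appears in the source document, which keeps hallucinations
# visible and makes human verification quick.
# Assumes the OpenAI Python client; the model name and checklist are illustrative.
import json

from openai import OpenAI

client = OpenAI()

CHECKLIST = [
    "Does the agreement identify the sub-processors the vendor uses?",
    "Does it commit to notifying the controller of a personal data breach without undue delay?",
    "Does it address international transfers of personal data?",
]


def review_vendor_document(document_text: str) -> list[dict]:
    prompt = (
        "You are reviewing a vendor data processing agreement for GDPR purposes.\n"
        "Answer each question using ONLY the document below. For every answer, include a\n"
        "verbatim quote from the document as evidence. If the document does not address\n"
        "the question, answer 'not addressed' and leave the quote empty.\n\n"
        "Questions:\n" + "\n".join(f"- {q}" for q in CHECKLIST) + "\n\n"
        "Document:\n" + document_text + "\n\n"
        'Reply with a JSON array of objects with keys "question", "answer" and "quote".'
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any capable model would do
        messages=[{"role": "user", "content": prompt}],
    )
    # In practice the JSON parsing needs to be more defensive than this.
    results = json.loads(response.choices[0].message.content)

    # The safeguard: a quote that is not found verbatim in the source is
    # flagged so a human reviews that answer rather than trusting it.
    for item in results:
        quote = item.get("quote", "")
        item["verified"] = bool(quote) and quote in document_text
    return results
```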
At this point I was no longer just talking about AI from a distance. I was enhancing my understanding through hands-on experience, something most legal professionals would not think to do.
The challenge I have run into, however, is not a technical one. I feel far more comfortable using AI now, and my experimentation with it will no doubt continue.
The challenge I now face is whether my increased use of AI is actually a good thing.




