I Wasn’t Trying to Use AI

  • Writer: John Bailey
  • Jan 29
  • 3 min read

This started as a work problem. It turned into a lesson about how I think, how I work, and what AI is actually useful for. This is not a how-to. It’s simply a record of what happened.


I did not start out trying to use AI.


I was trying to fix something that had been bothering me for a while. Remedial works were being tracked, but not in one place. Some lived in spreadsheets. Some arrived by email. Others appeared in audit forms or were mentioned in passing conversations.

Everyone was doing their job. But when you stepped back, it was hard to say, clearly and confidently, what was still open, what had been resolved, and where the real risks sat. Reporting took time. Explaining things took longer. Too much relied on memory and good intentions.

That was the moment I realised the issue was not effort. It was clarity.


Starting with what I already had

My first instinct was not to look for new software. Experience has taught me that new tools often add complexity if the problem itself is not properly understood.


So I gathered what already existed. Old trackers. Audit lists. Notes. Bits of historic data. It was messy and inconsistent, but it was real.


This is where ChatGPT entered the picture.


I did not ask it to build anything. I treated it as a second brain in the room. I showed it the information and asked straightforward questions:

  • Do any of these look like the same issue written in different ways?

  • Where might there be duplication?

  • What patterns can you see that I might be missing?


The responses were not clever. They were useful. They helped me spot where language was inconsistent, where tasks looked duplicated but actually were not, and where risk was assumed rather than stated.


It did not replace my judgement. It slowed me down just enough to think properly.


Making the work visible

Once I understood the shape of the problem, I started building the tracker.


The rules were simple. Every issue becomes a task. Every task has a location. Every task has a risk level. Every task has a status. And every task belongs to someone.


Nothing fancy. Just structure.
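Those five rules amount to a simple schema. As a hypothetical sketch (the actual tracker was a spreadsheet, and these field names are my own, not the author's), the same structure in Python would look something like this:

```python
from dataclasses import dataclass

@dataclass
class Task:
    """One remedial-works issue, captured as a single row."""
    description: str  # the issue, written once, in one place
    location: str     # the site or asset the work relates to
    risk: str         # e.g. "high", "medium", "low"
    status: str       # e.g. "open", "in progress", "resolved"
    owner: str        # the person the task belongs to

# With every issue in this shape, "what is still open, and where do
# the real risks sit?" becomes a one-line question.
tasks = [
    Task("Damaged fire door", "Block A", "high", "open", "JB"),
    Task("Blocked gutter", "Block C", "low", "resolved", "KT"),
]
open_high_risk = [t for t in tasks
                  if t.status == "open" and t.risk == "high"]
```

The point is not the code; it is that once every issue carries the same five fields, filtering and reporting stop being detective work.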


As I built it, I kept checking my thinking. I would explain what I was doing and why, and ask whether it made sense. Sometimes ChatGPT agreed. Sometimes it pushed back. A few times it made me realise I was overcomplicating something that did not need to be complicated.


That back-and-forth mattered. It kept the whole thing grounded in reality.


From spreadsheet to something useful

At some point, the tracker stopped feeling like a spreadsheet and started feeling like a tool.


I could see how many issues were open, where the higher-risk items sat, and what had moved during the month. Reporting no longer felt like an endurance test. Conversations became easier. Instead of explaining why something felt under control, I could show it.


Only then did automation start to make sense. Not because it was exciting, but because it removed friction. Issues logged on a phone could land straight in the tracker. Reports could be produced without re-typing the same information every time.
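The post does not describe the actual mechanism, so the following is purely an assumption: the friction-removal can be pictured as a small intake step that takes whatever a phone form submits and normalises it into a tracker row, applying the rules above (every new issue starts open, every task has an owner and a risk level). The field names here are hypothetical:

```python
def intake(form: dict) -> dict:
    """Normalise a raw phone-logged form into a tracker row.

    Hypothetical sketch: assumes the form supplies loose key-value
    fields and applies the tracker's rules so nothing is re-typed.
    """
    return {
        "description": form.get("issue", "").strip(),
        "location": form.get("site", "Unknown"),
        "risk": form.get("risk", "medium").lower(),
        "status": "open",  # new issues always start open
        "owner": form.get("reported_by", "Unassigned"),
    }

row = intake({"issue": " Leaking tap ", "site": "Block B",
              "reported_by": "JB"})
```

Whatever the real plumbing was, the shape of the idea is the same: capture once, normalise once, and the report writes itself.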


Throughout that process, ChatGPT stayed quietly in the background, helping me think through workflows, spot gaps, and ask the awkward “what happens if…” questions before they turned into real problems.


What actually changed

The biggest change was confidence.


Not confidence that everything was perfect (it never is), but confidence that we knew where we stood. That matters to clients. It matters to landlords. And it matters internally.


When my line manager looked at what I had built, he was not impressed by the technology. He was interested in the thinking behind it.


That stuck with me. This was not an AI project. It was a facilities problem worked through properly, with AI helping along the way.


What I’ve taken from it

AI is not about replacing people. It is about helping people think more clearly.


Used well, it helps organise messy information, challenge assumptions, and support better decisions. You do not need perfect data or technical expertise. You need a real problem and the willingness to look at it honestly.


Facilities management is full of complexity. That is not a flaw; it is the job. AI works best when it supports that reality, not when it tries to smooth it away.


A final thought

Looking back, the most useful thing I did was invite a second perspective into my thinking.

ChatGPT became that quiet voice in the room asking, “Does this actually make sense?”


As I think about what work looks like in the next chapter, I suspect that is a voice I will keep around.
