Using AI That Builds Trust, Not Risk
By Esther Yekeni, Senior Technical Services Engineer at Force24.
I recently attended a tech event that was both inspiring and eye-opening. Amid the buzz, one theme dominated the conversation: how AI is reshaping business, leadership, and growth.
I hear many teams say they use AI, yet few can point to measurable wins as a result. BCG’s 2025 study found only 5% of firms are gaining material value at scale, while 60% see little to none.
It’s clear that AI winners aren’t the ones chasing novelty. They’re the ones doing the work of rewiring workflows and upskilling people, not just prompting generative text. At Force24, we’ve learned that turning AI from a headline into real value means building processes that take root.
Making AI genuinely useful starts with how it supports people, not replaces them. By embedding it into our existing workflows and focusing on measurable value, we’ve found ways to strengthen trust, improve efficiency, and keep our marketing human. Force24 is a UK-based marketing automation platform built by a hands-on team in Leeds, and that mix of human insight and automation shapes how we build and use AI.
- Decide where AI belongs, and where it doesn’t
The first step is to get clear on what AI should actually focus on. What’s the real problem, or the part of your process that could run more efficiently? AI works best when it’s tackling repeatable, routine tasks that already have an owner and a way to measure success. That’s how you keep strategy and accountability with your team, not the tech.
For example, tasks that often slow teams down but follow a clear, repeatable pattern include:
- Drafting and structuring Product Requirements Documents, then linking to tickets. Start from a proven template, then ask AI to tighten the wording or surface open questions.
- Email and ticket triage. AI can summarise threads, identify blockers, and propose next steps.
- Marketing, sales, and AM tasks like campaign briefs, client recaps, follow-up emails, or internal updates. AI can take you from a blank page to a structured draft, freeing up time for more strategic work.
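To make the triage idea concrete, here is a minimal, illustrative sketch. A production setup would hand the ticket text to a model for summarising and next-step suggestions; the keyword rules, queue names, and priorities below are purely hypothetical stand-ins, but the point holds either way: routing logic and ownership stay with your team.

```python
# Toy ticket triage: classify an incoming message and propose a next step.
# Keywords, queues, and priorities are illustrative, not a real taxonomy.

def triage(ticket: str) -> dict:
    text = ticket.lower()
    if any(word in text for word in ("refund", "billing", "invoice")):
        queue, priority = "billing", "high"
    elif any(word in text for word in ("bug", "error", "broken")):
        queue, priority = "engineering", "high"
    else:
        queue, priority = "general", "normal"
    # The AI proposes; a named human owner still decides.
    return {"queue": queue, "priority": priority,
            "next_step": f"assign to {queue} owner"}

print(triage("Customer reports an error when sending a campaign"))
```

Even in this toy form, the structure matters: every ticket ends up with an owner and a measurable outcome, which is exactly what keeps accountability with the team rather than the tool.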
Equally important is knowing when AI shouldn’t be used. Any project that relies on confidential or personally identifiable data must be handled with extreme care. Without proper guardrails, there’s a real risk of breaching data security, intellectual property rights, or GDPR obligations, all of which every organisation has a duty to uphold. Enterprise surveys, including McKinsey’s latest State of AI report, show that the most successful adopters build clear governance models and senior oversight into their AI strategy. In short, it’s not just about capability; it’s about control, compliance, and trust.
Once you know where AI can add value, the next step is choosing the right way to deploy it.
- Agents, not just prompts, for long-form tasks
There’s a lot of talk about “agentic AI.” Simply put, it’s AI that can take a goal, break it into smaller tasks, use the right tools to complete them, and review the results before moving on.
It mirrors the kind of work most teams already do manually. Agentic AI offers a way to automate parts of that process while keeping your team front and centre.
Use agents where you can clearly measure time saved or improvements in quality across multiple runs, and always stay in the loop for irreversible actions. Finally, track every step carefully so you can tell whether the agent is genuinely saving time or simply moving the work around.
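The plan-act-review loop described above can be sketched in a few lines. This is an assumption-laden toy, not a real agent: the subtasks are stubbed rather than generated by a model, and the tool calls are placeholders. What it does show is the two guardrails from this section — pausing for human approval on irreversible actions, and logging every step so you can audit whether time was actually saved.

```python
# A minimal agentic-loop sketch: plan, execute, and log each step.
# Subtasks and tool calls are stubbed; a real agent would get both
# from a model and from actual integrations.

from dataclasses import dataclass, field

@dataclass
class AgentRun:
    goal: str
    log: list = field(default_factory=list)

    def plan(self) -> list:
        # In practice the model proposes subtasks; here we hard-code them.
        return [f"research: {self.goal}",
                f"draft: {self.goal}",
                f"review: {self.goal}"]

    def execute(self, task: str, irreversible: bool = False):
        if irreversible:
            # Keep a human in the loop for anything you can't undo.
            self.log.append(("paused_for_approval", task))
            return None
        result = f"done: {task}"            # stand-in for a tool call
        self.log.append(("completed", task))
        return result

run = AgentRun(goal="write campaign brief")
for task in run.plan():
    run.execute(task)

print(len(run.log))  # every step is recorded for later audit
```

The log is the important part: if you can't replay what the agent did, you can't tell whether it saved time or just moved the work around.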
At Force24, we’ve already started putting some of these ideas into practice.
- A Force24 example: AIDA, our AI support agent
We adopted Intercom’s AI bot and overlaid our learnings, best practice, and methods to create AIDA, which handles straightforward, frequently asked customer queries.
Trained on our own support content and articles, it’s overseen by our support team, who step in for edge cases and quality checks. The goal was to give our product experts more time to solve complex issues, reduce response times for simple ones, and improve consistency.
AIDA works because of how it’s scoped and supported. It’s plugged into our knowledge base, which we keep up to date, with regular spot checks and clear handoffs to our agents when needed.
In AIDA’s next stages, we plan to assess sentiment and spot gaps in our own knowledge base, creating a cycle of improvement between the AI and the teams.
If you’re developing something similar, start with a well-maintained knowledge base. Our public Support Hub uses the same structure that powers AIDA and that foundation matters far more than the model itself.
But how do we know we are getting the results we want from AIDA?
- ROI that survives the quarter
Getting quick wins from AI is fine, but durable ROI occurs when AI is part of long-running change.
The firms that see lasting value pair AI with process redesign, upskilling, and strong historical data. Many AI initiatives get abandoned because they never tie back to leadership KPIs. A simple rule: if a project can’t show a change in a named KPI, pause or reshape it.
At Force24, this means that AIDA’s success is tied to response time, resolution rate, and team hours unlocked, and not just “it works!”
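The "named KPI or pause" rule is simple enough to automate. The sketch below is illustrative only: the metric name, figures, and 5% threshold are invented for the example, not Force24 data. The idea is just to force every project to declare a baseline and a current reading for a specific KPI.

```python
# Sketch of a KPI gate: flag AI projects with no measurable change.
# Metric names, values, and the 5% threshold are illustrative.

def kpi_verdict(name: str, baseline: float, current: float,
                min_improvement: float = 0.05) -> str:
    change = (current - baseline) / baseline
    if abs(change) < min_improvement:
        return f"{name}: no material change, pause or reshape"
    return f"{name}: {change:+.0%} change, continue"

# A drop in response time is a change worth continuing for.
print(kpi_verdict("avg_response_minutes", baseline=42.0, current=18.0))
```

Whatever the tooling, the discipline is the same: "it works!" is not a KPI, but response time, resolution rate, and team hours unlocked are.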
- Safety, privacy, and “AI incognito”
Finally, moving quickly with AI and getting ahead of the curve doesn’t mean throwing caution to the wind. You can still protect sensitive business data while improving processes by choosing enterprise tools with documented data protections and using built-in features like temporary or no-history modes for sensitive prompts.
For example:
- OpenAI’s Temporary Chats aren’t used for training and are deleted within 30 days.
- Microsoft 365 Copilot and Google Workspace with Gemini both follow enterprise-grade privacy commitments and ISO/IEC 27001 standards.
- In the UK, GDPR and the proposed Artificial Intelligence (Regulation) Bill both emphasise accountability, human oversight, and data minimisation when using AI tools.
The National Cyber Security Centre (NCSC) reported a 15% rise in serious cyber incidents in 2024, noting that threat actors are already experimenting with large language models to craft more convincing phishing and social-engineering attacks. Including AI adoption in your wider cyber-resilience strategy isn’t just good practice, it’s essential to staying safe while staying ahead.
Keeping Pace with Progress
As technology evolves, so must our approach to it.
The early excitement around AI is fading. What comes next relies on process, measurement, and trust. If a task is repeatable and measured, AI should be helping you. If it’s strategic or ambiguous, keep ownership with your skilled team and use AI to prepare options, not to decide.
AI is moving fast, and staying informed is no longer optional. The real advantage belongs to those who keep learning, sharing, and showing up at events, in communities, and in conversations that help shape how we use this technology responsibly. We all have a role to play in making sure progress doesn’t just move quickly but moves wisely.
Get in touch
Ready to take your marketing up a gear? Give us a call or drop us an email – our UK-based team is on hand to help.