We know: it’s a long read. But as AI tools become more embedded in our day-to-day workflows, especially the local agents creeping ever closer to our critical data, we think it’s worth diving into the details. This post sits in our Borders Tech Connect resource centre here on Substack, ready for you to revisit whenever you need. So, whether you read it in one sitting or dip in as the need arises, we hope it helps frame the real cybersecurity questions AI tools are raising, without the hype.
A Brief Overview of AI-Driven Tools
ChatGPT and other Generative AI agents.
Overview of use cases (coding assistants, content generation, automation).
Contextualising AI Tools within Existing Cybersecurity Practices
Similarities with established cybersecurity principles (data protection, privacy, authentication).
Differences: dynamic, conversational, context-aware interactions.
Is this a new thing? Well, yes and no. AI-driven tools like ChatGPT and other generative agents have burst onto the scene, offering a host of practical applications, from coding assistants that help developers debug code in real time, to automated content creators streamlining your marketing, to workflow automation bots that tackle repetitive tasks. Recently, we’ve seen the emergence of even smarter tools: AI agents that can run locally, connecting securely into your own workflows, data, and toolkits. Think ChatGPT agents integrating directly with your APIs, local file systems, or databases, offering powerful assistance right where your sensitive data lives. Yet, as transformative as these tools are, many of the cybersecurity principles we know and trust (data protection, privacy, robust authentication) still apply. The key difference lies in the dynamic, conversational nature of these AI tools. They’re context-aware and interactive, engaging users in real-time dialogue that opens new avenues for creativity, productivity, and collaboration, but also raises fresh cybersecurity questions about managing information securely in a constantly evolving conversation.
Local agents, those powerful AI tools running right within your own environment, are still quite fresh, but they’re not totally unexplored territory. Emerging platforms and frameworks are built for exactly this job, such as LangChain, AutoGPT, and Microsoft’s Semantic Kernel, alongside private deployments via Azure OpenAI or AWS Bedrock. Tools like these help you plug AI directly into your local workflows, data sources, and applications while keeping security and compliance firmly in focus. But it’s early days yet: this space is evolving fast, and best practices are still being refined. So, while you’re not entirely on your own, expect to be part of a community figuring out the cybersecurity implications as these innovative tools become mainstream.
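To make that a little less abstract, here’s a minimal sketch of the “local agent” pattern: a small script that answers questions about files in a single approved folder, using the openai Python SDK. The folder name, size cap, and model name are placeholders for illustration rather than recommendations, and a real deployment would sit behind the governance and access controls we get into below.

```python
from pathlib import Path
from openai import OpenAI  # assumes the openai v1.x Python SDK is installed

# Hypothetical example values: adjust for your own environment.
APPROVED_DIR = Path("agent_workspace")   # the ONLY folder the agent may read
MAX_CHARS = 4_000                        # crude cap on how much text we expose

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def read_approved_file(name: str) -> str:
    """Read a file, but only if it resolves inside the approved folder."""
    path = (APPROVED_DIR / name).resolve()
    if APPROVED_DIR.resolve() not in path.parents:
        raise PermissionError(f"{name} is outside the approved workspace")
    return path.read_text(encoding="utf-8")[:MAX_CHARS]


def ask_about_file(name: str, question: str) -> str:
    """Send the question plus a bounded excerpt of the file to the model."""
    excerpt = read_approved_file(name)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer using only the provided excerpt."},
            {"role": "user", "content": f"File excerpt:\n{excerpt}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask_about_file("meeting_notes.txt", "Summarise the action points."))
```

Even a toy like this bakes in two themes that run through the rest of this post: the agent only sees data you have deliberately placed in its workspace, and how much it sees is bounded.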
Comparing AI Tools to Traditional Cybersecurity Processes
Similarities
Data governance remains crucial (compliance with data protection regulation such as GDPR, and standards like ISO 27001).
Importance of access controls, authentication, logging, and audit trails.
Differences
Real-time generative interactions vs static pre-programmed responses.
Increased complexity of auditing conversational agents.
Potential for unintended leaks due to conversational nature.
When we talk about cybersecurity for local AI agents, Excel makes a handy analogy. Think about it: we’ve often been a little too comfortable using Excel as our go-to for almost everything, from customer data to sensitive business processes, without much oversight. Excel’s flexibility feels great, but it has also allowed a bit of loose thinking, leading to unsecured sheets, sloppy access controls, and unclear audit trails. Now, imagine handing that same Excel file to a local AI agent that has full, dynamic control over your data. Suddenly, the stakes rise considerably: that casual approach we’ve taken in the past becomes a genuine risk. The informal, conversational style of local AI interactions might encourage even more careless sharing of sensitive details, potentially multiplying security concerns. Excel itself isn’t inherently unsafe; it’s how we use it that matters. Likewise, local AI agents aren’t dangerous by default, but they demand clearer governance, tighter access control, and smarter data handling practices than we’ve traditionally applied to our beloved spreadsheets.
Key Cybersecurity Concerns when Implementing AI Tools
Data Leakage and Privacy
Risk of accidental exposure of sensitive or proprietary information.
Case examples or anecdotes (Samsung’s ChatGPT incident as a relatable case).
Integrity and Trustworthiness
Risks of misinformation or hallucinations.
Ensuring outputs align with regulatory and compliance standards.
Access Control and Authentication
Managing user roles, API keys, and permissions.
Ensuring least-privilege principle is upheld when integrating AI.
When rolling out local AI agents, there are three key cybersecurity concerns that we really need to keep front and centre.
First up—Data Leakage and Privacy: Just because your AI agent runs locally doesn’t mean your data is magically secure. There’s a real risk of accidentally exposing sensitive or proprietary information through casual, conversational interactions. Think of what happened with Samsung when employees shared confidential code snippets with ChatGPT. Now imagine something similar happening locally—your AI agent accessing sensitive files, customer details, or financial records without proper oversight.
Then there’s Integrity and Trustworthiness: AI agents, as helpful as they are, sometimes confidently deliver misinformation (also known as “hallucinations”). The risk increases when local agents dynamically interpret and generate outputs from your internal data. Ensuring accuracy and compliance with regulatory standards becomes even trickier, especially if the agent is integrated into critical business processes or customer-facing interactions.
And finally, Access Control and Authentication: Local AI agents need clear rules about who can use them, how they’re authenticated, and exactly what permissions they have. Properly managing user roles, API keys, and permissions is critical. It’s essential to uphold the principle of least privilege—giving your AI just enough access to perform its tasks, without unintentionally opening your entire internal environment to unexpected vulnerabilities.
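To put the least-privilege idea into something concrete, here’s a rough sketch of a thin permissions layer sitting between users and an agent’s tools. The roles and tool names are hypothetical; the point is the shape of the check, not the specifics.

```python
# Hypothetical roles and tool names, purely for illustration.
ROLE_PERMISSIONS = {
    "analyst":  {"read_reports"},
    "engineer": {"read_reports", "query_database"},
    "admin":    {"read_reports", "query_database", "update_records"},
}


def authorise_tool_call(user_role: str, tool_name: str) -> None:
    """Refuse any tool call the user's role hasn't been explicitly granted."""
    allowed = ROLE_PERMISSIONS.get(user_role, set())
    if tool_name not in allowed:
        raise PermissionError(f"role '{user_role}' may not call '{tool_name}'")


def run_tool(user_role: str, tool_name: str) -> str:
    authorise_tool_call(user_role, tool_name)
    # Dispatch to the real tool here, and log who called what for the audit trail.
    return f"{tool_name} executed for role '{user_role}'"


print(run_tool("engineer", "query_database"))   # allowed
# run_tool("analyst", "update_records")         # raises PermissionError
```

The useful property is that the default answer is “no”: a tool call only goes through when a role has been explicitly granted it, and every call can be logged for the audit trail.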
In short, bringing AI agents into your local workflows is powerful—but that power comes with responsibilities around security, accuracy, and controlled access.
Overall Thinking for Implementing AI Tools
Establish Clear Policies
Create guidelines for team interactions with AI agents.
Define acceptable use clearly (what can and cannot be shared).
Consider Technical Mitigations
Implement filters or monitoring tools to detect sensitive information disclosures.
Explore API-level controls or middleware to protect information.
Awareness and Training
Educate users on risks, best practices, and incident responses.
Ah, policies—the fun never stops! But here’s the truth: if you’re bringing local AI agents into your workflows and haven’t previously had to deal with detailed policies (perhaps you haven’t gone through ISO certification), then now’s definitely the time to embrace them. Clear, practical guidelines are essential: set out exactly how your team can (and can’t) interact with these agents. Make it crystal-clear what information is off-limits and what’s acceptable for sharing.
Alongside policies, technical mitigations are your friends. Implement filters, monitoring tools, or middleware to keep an eye on sensitive information and flag when someone accidentally—or intentionally—shares something they shouldn’t. API-level controls can also prevent your AI from casually wandering into places it shouldn’t.
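As a simple illustration of what such a filter might look like, the sketch below scans an outgoing prompt for patterns that resemble personal or secret data before anything leaves your boundary. The patterns are deliberately crude and purely illustrative; in practice you would lean on dedicated data-loss-prevention tooling rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; real deployments lean on dedicated DLP tooling.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "possible API key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}


def check_prompt(prompt: str) -> list[str]:
    """Return labels for anything in an outgoing prompt that looks sensitive."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


def send_to_agent(prompt: str) -> str:
    findings = check_prompt(prompt)
    if findings:
        # Block (or redact, or simply log) before anything leaves your boundary.
        raise ValueError(f"Prompt blocked; appears to contain: {', '.join(findings)}")
    return "forwarded to the AI agent"  # placeholder for the real call


print(send_to_agent("Please summarise our Q3 marketing plan"))      # fine
# send_to_agent("Customer email is jane.doe@example.com")           # blocked
```

Whether you block, redact, or simply log a match is a policy decision; the value is in having a checkpoint at all.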
And let’s not underestimate the human factor. Awareness and training sessions, while not always thrilling, are crucial. Everyone who interacts with local AI agents needs to understand the risks, grasp the basics of secure interaction, and know exactly how to respond if something goes wrong.
The bottom line? If detailed policies haven’t been your thing before, local AI agents mean they’re about to be. Embrace the paperwork, learn to love it—or at least tolerate it—and keep your information safe.
Does Telling ChatGPT Personal/Secure Stuff Create Issues?
Potential Issues
Sharing confidential information with ChatGPT risks data exposure or leaks.
Regulatory and compliance implications (GDPR, ISO 27001, industry-specific regulations).
Mitigating Strategies
Clearly define and enforce guidelines about what data should be shared.
Explore private instances or enterprise solutions that offer stronger security assurances (e.g., Azure OpenAI Service, AWS Bedrock, private deployments).
We use GPT-based tools so frequently now, they’ve become second nature—but does casually sharing personal or secure information create issues? Unfortunately, yes, it can. When you’re chatting away, it feels safe and conversational, but there’s always the risk of unintentionally exposing sensitive details. That casual request to fix some buggy code, or tweak an internal document, might inadvertently reveal confidential client data or proprietary information.
From a regulatory standpoint, this can have serious implications—think GDPR, ISO 27001 compliance, or sector-specific requirements. Sharing sensitive data with public GPT services risks potential violations of data protection rules, not to mention damage to your reputation.
So, how do we stay safe? Firstly, define clear boundaries: establish practical, enforceable guidelines about what’s acceptable to share with AI. Educate your team and make sure everyone understands these rules. And secondly, consider safer setups: look at enterprise-grade services like Azure OpenAI, AWS Bedrock, or other private deployments designed to protect your data while still harnessing the power of conversational AI.
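To give a feel for what a “safer setup” looks like in practice, pointing your code at a private Azure OpenAI deployment rather than the public endpoint is often little more than a change of client configuration. This sketch assumes the openai v1.x Python SDK; the endpoint, deployment name, and API version shown are placeholders you’d swap for your own resource’s values.

```python
import os
from openai import AzureOpenAI  # assumes the openai v1.x Python SDK

# Placeholder endpoint, deployment name, and API version: use your own resource's values.
client = AzureOpenAI(
    azure_endpoint="https://your-resource-name.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="your-deployment-name",  # the name of YOUR deployment, not a public model name
    messages=[{"role": "user", "content": "Summarise our data-sharing guidelines."}],
)
print(response.choices[0].message.content)
```

The application code barely changes; what changes is where your requests go and whose data-handling terms apply, which is rather the point.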
In short, be aware, be deliberate, and choose secure setups where it matters. A little caution now could save big headaches later.
How Do Chat Engines Handle and Store User Information?
General Policy (OpenAI Example)
By default, ChatGPT may retain chat history for training purposes unless explicitly opted out.
Users can disable chat history, restricting OpenAI from using conversation data for training.
Enterprise-level Considerations
API usage generally does not involve data retention beyond initial processing.
Confirm vendor-specific data retention and deletion policies explicitly.
Recommendations
Review Data Processing Agreements (DPAs) and privacy policies.
Consider enterprise options or private deployments for sensitive business processes.
Ever wondered what happens behind the scenes with all the info we happily share with AI tools like ChatGPT? Well, let’s break it down clearly. By default, public services like ChatGPT might retain your chat history and interactions for training purposes—unless you explicitly opt out. You do have control, though: disabling chat history prevents your data from being used to train or improve the model, giving you more privacy.
When it comes to enterprise-level usage, such as via APIs, vendors typically process your requests without holding onto your data beyond what’s necessary for immediate operations. But don’t assume—that’s something you should always verify. It’s crucial to check vendor-specific policies closely, confirming exactly how long they keep data, where it’s stored, and how it can be deleted.
Our recommendation? Be proactive—review your Data Processing Agreements (DPAs) carefully, understand the privacy terms clearly, and consider enterprise-grade solutions or private deployments for handling sensitive or business-critical interactions. After all, staying informed on how your data is used and stored is one of the smartest moves you can make in the AI age.
Conclusion and Best Practices
Clearly establish guidelines around AI tool usage.
Assess and balance innovation benefits against cybersecurity risks.
Prioritise education, policy enforcement, and technical measures.
Wrapping it all up: AI tools, especially local agents, offer massive opportunities, but they don’t come without risk.
The best way forward is clear-eyed and pragmatic. First, establish solid, workable guidelines around how your team should use AI—what’s in bounds, what’s out of bounds, and where caution is needed. These don’t need to be 50-page policy documents (unless you want them to be), but they do need to be understood and followed.
Second, embrace the balancing act: yes, AI agents can save time, automate tedium, and unlock new capabilities—but that has to be weighed against very real cybersecurity risks. Don’t let excitement override your responsibility to keep systems and data safe.
And finally, keep education, policy enforcement, and technical safeguards front and centre. Train your team. Put controls in place. Monitor and adjust. This isn’t about fear—it’s about making smart decisions as the tools evolve.
In short: use the tools, but don’t lose your head.