Building Secure Systems: Tools from Debbie Baird’s Workshop
Some invaluable checklists and toolkits
As part of her workshop on Attack Surfaces and Social Engineering, Debbie Baird from CodeBase didn’t just give a great talk — she left us with a toolkit. Below you’ll find a set of practical checklists and security principles designed for people building in low-code, no-code, or integrated environments. Whether you’re running Airtable automations, Zapier workflows, or designing custom tools with AI agents, these resources are essential reading.
They’re all here in the post so you can reference them easily, bookmark for later, or copy them into your own security documentation.
Big thanks again to Debbie for being so generous with her time, knowledge, and materials.
Included below:
Regular AI vs AI Agents — What’s the difference, and what does that mean for security?
Secure by Design Principles — Eleven foundational guidelines to bake security into every system.
Secure Workflow Design Checklist — Best practices for building safe, auditable automations.
Zapier Security Checklist — Specific actions to secure one of the most-used no-code tools.
Workflow Stress Testing Checklist — How to test the resilience and limits of your systems.
Glossary of Terms — An explanation of key terms used across the resources.
Regular AI vs. AI Agents
Regular AI
Definition: A single-purpose model or system that performs a specific, predefined task when prompted — no autonomy, no long-term memory, no self-directed actions.
Characteristics:
Reactive: Only acts when given an explicit input.
Stateless: Doesn’t remember past interactions unless designed to.
Narrow scope: Good at one category of task (e.g., classify emails, recommend products, translate text).
No environment interaction: Processes data, but doesn’t "do" anything in the real world unless another system uses its output.
Examples:
ChatGPT answering a question once it is typed in.
An image recognition AI that labels photos.
A fraud detection algorithm running on transactions.
AI Agents
Definition: An AI system with a degree of autonomy that can take goals, make decisions, and perform multi-step actions by interacting with digital environments or tools, without constant human direction.
Characteristics:
Proactive: Can initiate actions based on goals or triggers.
Goal-oriented: Works toward a defined objective, sometimes adapting the plan along the way.
Tool-using: Can connect to APIs, databases, or systems to take actions.
Environment-aware: Monitors changes and responds accordingly.
Memory & state: Can store and use past information to make better decisions.
Examples:
An AI that monitors security logs and opens/updates tickets when suspicious activity is detected.
An automated customer support AI that answers emails, updates CRM records, and escalates critical cases.
A DevOps AI that deploys code, runs tests, and rolls back if errors occur.
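To make the difference concrete, here is a minimal, hypothetical sketch: a regular AI call is a single prompt and response, while an agent loops, picking tools and feeding results back into its next decision. The call_model function and the tool names are placeholders, not a real API.

```python
# Hypothetical sketch contrasting a one-shot model call with an agent loop.
# call_model and the tools are placeholders, not a real API.

def call_model(prompt: str) -> str:
    """Stand-in for a real model call; replace with your LLM API of choice."""
    return "DONE"  # stub so the sketch runs without a live model

# Regular AI: one input, one output, nothing else happens.
def classify_email(email_text: str) -> str:
    return call_model(f"Classify this email as spam or not spam:\n{email_text}")

# AI agent: works toward a goal, choosing tools and acting until done.
TOOLS = {
    "search_logs": lambda query: f"(log lines matching {query!r})",  # illustrative
    "open_ticket": lambda summary: f"(ticket created: {summary})",   # illustrative
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    context = goal
    for _ in range(max_steps):  # hard cap on steps is a simple fail-safe
        decision = call_model(
            f"Goal: {goal}\nContext: {context}\n"
            "Reply with 'TOOL <name> <input>' or 'DONE'."
        )
        parts = decision.split(" ", 2)
        if parts[0] != "TOOL" or len(parts) != 3 or parts[1] not in TOOLS:
            break  # DONE or anything unexpected: stop safely
        context += "\n" + TOOLS[parts[1]](parts[2])  # result feeds the next decision
    return context
```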
Key Differences

| | Regular AI | AI Agents |
| --- | --- | --- |
| Initiative | Reactive: acts only on explicit input | Proactive: can initiate actions from goals or triggers |
| Memory | Stateless unless specifically designed to remember | Stores and uses past information |
| Scope | Narrow, single category of task | Goal-oriented, multi-step, can adapt the plan |
| Real-world actions | Processes data only; another system must act on its output | Connects to APIs, tools, and systems to act directly |
Security Implications
Regular AI: Lower risk — can only output what you feed it in the moment.
AI Agents: Higher risk — because they can act on systems, make changes, and chain multiple actions, they need strong access controls, monitoring, and fail-safes.
Apply least privilege to API keys and connected systems.
Include human-in-the-loop for high-impact actions.
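As a rough illustration of that last point, high-impact actions can be routed through an explicit approval step before an agent or automation is allowed to run them. This is a minimal sketch: the action names are made up, and the console prompt stands in for whatever approval channel (Slack, email, a ticket) you actually use.

```python
# Minimal sketch of a human-in-the-loop gate for high-impact actions.
# The action names are illustrative; the console prompt stands in for a real
# approval channel such as Slack, email, or a ticketing system.

HIGH_IMPACT = {"delete_records", "change_permissions", "send_payment"}

def request_approval(action: str, details: str) -> bool:
    """Ask a human to approve a high-impact action before it runs."""
    answer = input(f"Approve {action}? ({details}) [y/N]: ")
    return answer.strip().lower() == "y"

def execute_action(action: str, details: str) -> None:
    if action in HIGH_IMPACT and not request_approval(action, details):
        print(f"Blocked: {action} was not approved.")
        return
    print(f"Running {action}: {details}")  # the real side effect would go here
```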
Secure by Design Principles
Core guidelines for building security into systems from the start
Principle of Least Privilege (PoLP)
• Give users, apps, and processes only the access they need
• Avoid 'just in case' permissions
• Review and adjust privileges regularly
Defence in Depth
• Implement multiple layers of security controls
• Combine authentication, segmentation, and encryption
• Ensure backup protections exist if one control fails
Fail Secure, Not Fail Open
• Default to a secure state when errors occur
• Deny access if authentication or security checks fail
Secure Defaults
• Enable strong security settings by default
• Turn on MFA and strong password policies out of the box
• Enable logging and monitoring automatically
Minimise Attack Surface
• Disable unused features, APIs, and ports
• Remove default admin accounts
• Uninstall unnecessary components
Threat Modelling from Day One
• Identify assets, threats, and attack paths before building
• Update the threat model as the system evolves
Secure the Supply Chain
• Verify the integrity of third-party code and dependencies
• Use signed packages and libraries
• Scan dependencies for vulnerabilities
Input Validation & Output Encoding
• Validate all inputs for type, size, format, and whitelist values
• Sanitise or encode outputs to prevent injection attacks (see the sketch below)
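A small sketch of the two bullets above, using only the Python standard library: incoming values are checked against a whitelist and a size limit, and anything shown back to users is encoded so it cannot inject HTML or script. The field names are illustrative.

```python
# Sketch: whitelist validation on the way in, encoding on the way out.
import html

ALLOWED_STATUSES = {"new", "in_progress", "closed"}  # whitelist of accepted values
MAX_COMMENT_LENGTH = 2000

def validate_ticket(status: str, comment: str) -> dict:
    if status not in ALLOWED_STATUSES:
        raise ValueError(f"Unexpected status: {status!r}")
    if len(comment) > MAX_COMMENT_LENGTH:
        raise ValueError("Comment too long")
    # Encode on output so user-supplied text cannot inject HTML or JavaScript.
    return {"status": status, "comment": html.escape(comment)}
```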
Security as a Continuous Process
• Monitor, patch, and test systems regularly
• Integrate automated security checks into CI/CD pipelines
Privacy by Design
• Collect only the data necessary for the task
• Apply minimisation, anonymisation, and retention policies
Human-Centric Security
• Design to reduce the likelihood of user mistakes
• Provide clear, actionable security prompts
• Include training and awareness in rollout plans
Secure Workflow Design Checklist
Best practices for safe automation and integration
Apply the Principle of Least Privilege
• Grant only the permissions required for each app/service
• Avoid connecting automations with full admin accounts unless necessary
• Use dedicated service accounts for long-running automations
Control the Data Flow
• Map where sensitive data enters, travels, and is stored
• Avoid sending PII or confidential data to services that do not require it
• Mask or encrypt sensitive values before passing them between steps (see the sketch below)
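One way to act on the masking bullet, sketched with the standard library only: keep just enough of an email address to be recognisable, and replace identifiers with a salted one-way hash before later steps see them. The field names and salt are illustrative.

```python
# Sketch: mask or hash sensitive values before passing them to later steps.
import hashlib

def mask_email(email: str) -> str:
    """Keep just enough to be recognisable, e.g. j***@example.com."""
    local, _, domain = email.partition("@")
    return f"{local[:1]}***@{domain}"

def pseudonymise(value: str, salt: str) -> str:
    """One-way hash so later steps can match records without the raw value."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

record = {
    "email": mask_email("jane.doe@example.com"),
    "customer_ref": pseudonymise("CUST-0042", salt="workflow-secret"),  # illustrative salt
}
```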
Validate Inputs
• Check incoming data format and source authenticity
• Use filters or conditional logic to block suspicious inputs
• Verify webhook or API signatures, e.g. with HMAC validation (see the sketch below)
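A minimal sketch of HMAC signature checking for an incoming webhook, assuming the provider sends a hex SHA-256 signature in a header and shares a secret with you; the header contents and environment variable name are assumptions, so check your provider's documentation for the exact scheme.

```python
# Sketch: verify an incoming webhook's HMAC signature before trusting the payload.
import hashlib
import hmac
import os

WEBHOOK_SECRET = os.environ["WEBHOOK_SECRET"]  # shared secret, never hard-coded

def is_valid_signature(raw_body: bytes, signature_header: str) -> bool:
    expected = hmac.new(
        WEBHOOK_SECRET.encode("utf-8"), raw_body, hashlib.sha256
    ).hexdigest()
    # compare_digest avoids timing side-channels when comparing signatures
    return hmac.compare_digest(expected, signature_header)
```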
Limit External Triggers
• Avoid 'catch-all' webhooks that accept data from unknown sources
• Restrict trigger sources to authenticated systems
• Secure scheduled workflows to prevent abuse
Segment & Isolate Workflows
• Break large workflows into smaller, modular automations
• Separate high-risk and low-risk workflows into different accounts/environments
• Do not mix production and testing in the same automation environment
Logging & Monitoring
• Log all critical actions performed by workflows
• Send logs to a central security dashboard or SIEM
• Monitor for unusual run frequency or timing (see the sketch below)
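To make those logging bullets concrete, here is a small sketch that writes one structured (JSON) log line per critical workflow action, the kind of record a central dashboard or SIEM can alert on. The action and field names are illustrative.

```python
# Sketch: one structured log line per critical workflow action.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("workflow-audit")

def audit(action: str, workflow: str, **details) -> None:
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "workflow": workflow,
        "action": action,
        "details": details,
    }))

audit("record_deleted", workflow="crm-cleanup", record_id="12345", actor="service-account")
```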
Secure API Keys & Credentials
• Store credentials in secure vaults, not hard-coded in workflows (see the sketch after this list)
• Rotate API keys and OAuth tokens regularly
• Remove unused credentials promptly
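A short sketch of the 'secure vaults, not hard-coded' point: the workflow reads credentials from environment variables (populated by whatever secrets manager you use) and fails securely if one is missing. The variable name is illustrative.

```python
# Sketch: load credentials from the environment, never from the workflow definition.
import os

def get_secret(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        # Fail secure: stop rather than fall back to a baked-in key.
        raise RuntimeError(f"Missing credential: {name}")
    return value

API_KEY = get_secret("CRM_API_KEY")  # rotated in the secrets manager, not in code
```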
Human-in-the-Loop for Risky Actions
• Require manual approval for high-impact actions (e.g., deleting data, changing permissions)
• Add secondary verification for financial or large-scale data changes
Plan for Failure & Abuse
• Set error handling to prevent infinite retries (see the sketch after this list)
• Trigger alerts for unexpected outputs or high-volume runs
• Maintain backups for critical data touched by workflows
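For the 'prevent infinite retries' bullet, a minimal sketch of bounded retries with exponential backoff and an alert on final failure; run_step and send_alert are placeholders for your own workflow pieces.

```python
# Sketch: bounded retries with backoff, so a failing step cannot loop forever.
import time

def run_with_retries(run_step, send_alert, max_attempts: int = 3) -> bool:
    for attempt in range(1, max_attempts + 1):
        try:
            run_step()
            return True
        except Exception as exc:  # broad catch is deliberate in this sketch
            if attempt == max_attempts:
                send_alert(f"Step failed after {attempt} attempts: {exc}")
                return False
            time.sleep(2 ** attempt)  # exponential backoff between attempts
    return False
```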
Review Regularly
• Perform quarterly workflow security audits
• Remove unused automations and integrations
• Update documentation and security controls
Zapier Security Checklist
For safe automation and data handling
Account & Access Security
• Enable Multi-Factor Authentication (MFA)
• Use a strong, unique password stored in a password manager
• Separate accounts for testing and production
• Limit user access and assign least privilege roles
App Connection Hygiene
• Review connected apps regularly and remove unused integrations
• Use service accounts instead of personal accounts where possible
• Re-authenticate app connections after password changes
• Authorise only the minimal permissions needed for your workflows
Data Privacy & Compliance
• Know what data is moving through Zapier and where it is stored
• Mask or redact sensitive fields before sending to third-party apps
• Verify compliance with GDPR, HIPAA, or other regulations
• Disable Zapier task history storage for sensitive workflows
Workflow Design to Reduce Risk
• Validate incoming data before processing
• Avoid 'catch-all' triggers that collect unnecessary data
• Break large workflows into smaller, isolated automations
• Log actions into a secure audit trail
Monitoring & Alerts
• Enable email notifications for errors or new logins
• Set up alerts for abnormal automation activity
• Review execution history regularly for suspicious actions
AI-Specific Risk Awareness
• Treat AI outputs as untrusted input — sanitise before use (see the sketch below)
• Do not send sensitive PII or confidential data to AI services
• Consider a human review step for high-impact AI-driven automations
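As a rough sketch of the 'treat AI outputs as untrusted input' point: parse whatever the model returns strictly, and only act on it if it names an action from a whitelist. The expected JSON shape and the action names are assumptions for illustration.

```python
# Sketch: treat model output as untrusted input; parse strictly, whitelist actions.
import json

ALLOWED_ACTIONS = {"tag_contact", "draft_reply"}  # what the automation may do

def safe_parse_ai_output(raw_output: str) -> dict | None:
    try:
        data = json.loads(raw_output)
    except json.JSONDecodeError:
        return None  # malformed output: do nothing
    if data.get("action") not in ALLOWED_ACTIONS:
        return None  # unknown or unexpected action: do nothing
    return {"action": data["action"], "target": str(data.get("target", ""))[:200]}
```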
Workflow Stress Testing Checklist
Step-by-step guide for load, fault, and security testing of workflows
Define the Normal Baseline
• Measure average workflow volume, trigger frequency, and execution time
• Record typical success/failure rates
• Capture baseline from real logs before starting
Build a Controlled Test Environment
• Clone workflow into a sandbox/test environment
• Use mock or anonymised data
• Point integrations to staging endpoints, not production
Test for Volume & Load
• Simulate high trigger frequency (e.g., 10x normal)
• Push large payloads (files, API responses, bulk data)
• Chain multiple workflows to test cascading effects (see the sketch below)
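A small sketch of a volume test, assuming a staging webhook URL and the third-party requests library: it posts a few hundred bulky payloads in parallel and summarises the response codes. The URL is a placeholder; never point a load test at production.

```python
# Sketch: push well above normal volume at a *staging* webhook and watch responses.
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party; pip install requests

STAGING_WEBHOOK = "https://hooks.example.com/staging/workflow-test"  # placeholder URL

def fire(i: int) -> int:
    payload = {"test_run": True, "sequence": i, "note": "x" * 1000}  # bulky payload
    return requests.post(STAGING_WEBHOOK, json=payload, timeout=10).status_code

with ThreadPoolExecutor(max_workers=20) as pool:
    statuses = list(pool.map(fire, range(500)))

print({code: statuses.count(code) for code in set(statuses)})  # response-code breakdown
```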
Test for Concurrency
• Trigger the same workflow in parallel from multiple sources
• Check for race conditions and duplicate processing
• Confirm unique ID checks or locking mechanisms (see the sketch below)
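For the unique-ID bullet, a minimal sketch of an idempotency guard so the same event is not processed twice when parallel triggers race each other. An in-process set and lock keep the example self-contained; a real workflow would use a database constraint or a distributed lock.

```python
# Sketch: a unique-ID check so the same event is only processed once.
import threading

_seen_ids: set[str] = set()
_lock = threading.Lock()

def process_once(event_id: str, handler) -> bool:
    with _lock:
        if event_id in _seen_ids:
            return False  # duplicate trigger: skip, already handled
        _seen_ids.add(event_id)
    handler()  # safe to do the real work exactly once
    return True
```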
Test for Fault Tolerance
• Simulate connection drops mid-run
• Force API errors or service timeouts
• Check retry logic, error logging, and alerting
• Ensure data remains in a recoverable state
Test for Security Under Load
• Send malformed or unexpected data at high speed
• Verify authentication and access controls under stress
• Ensure error messages do not expose sensitive info
Monitor During the Test
• Track CPU, memory, and network usage (if self-hosted)
• Monitor API rate limits and quotas
• Watch queue lengths, response times, and error rates
Review & Fix
• Document bottlenecks, slowdowns, and failures
• Add throttling, caching, or batching where needed
• Improve error handling and retries
• Implement circuit breakers to stop runaway loops
Repeat Periodically
• Run a mini stress test before major workflow changes
• Conduct a full-scale test at least twice per year
Glossary of Terms
Least Privilege (PoLP)
Only give users or systems the minimum access they need to perform their task — nothing more. It limits damage if something goes wrong.
Attack Surface
All the possible points where someone could try to break into or exploit your system — from login pages to exposed APIs.
Social Engineering
Tricking people into giving up information or access, often by pretending to be someone trustworthy (e.g., phishing emails, fake support calls).
AI Agent
Unlike regular AI (which just responds to input), an agent can take action on its own — running tasks, connecting to tools, and making decisions.
Catch-all Trigger
A trigger in a workflow that accepts any incoming data — useful, but risky if not tightly controlled.
Concurrency
Multiple processes that are running at the same time. If not managed well, this can lead to errors like double-processing or data collisions.
Webhooks
Automated messages sent between apps when something happens (e.g., “a new form was submitted”). Powerful, but they must be secured.
Fail Secure
If something breaks or errors out, it should default to safe mode — no access granted, rather than leaving the door open.
Threat Modelling
Thinking ahead about what could go wrong: What’s valuable? Who might attack? How might they do it?
Human-in-the-Loop
Adding a manual step (like approval) in an automated system to reduce risk — especially before doing something irreversible.