
AI & Automation Glossary

Plain-English definitions of the vocabulary you need to make sense of AI in 2026. Each entry is self-contained — so you can link, quote, or send just one without needing the rest of the page.

Level key: 101 · Intermediate · Advanced

Start here

If you're just getting into AI

These 34 entries cover the vocabulary that shows up in every AI conversation in 2026 — the major platforms, what an LLM actually is, what a prompt does, how automation and AEO fit together. Read them in order if you're new. Bookmark the whole page if you're not.

Anthropic

101
Anthropic is the AI safety company that builds Claude. Founded in 2021 by siblings Dario and Daniela Amodei along with other former OpenAI researchers, Anthropic emphasizes constitutional AI and safety research. Anthropic introduced the Model Context Protocol (MCP) and has major partnerships with AWS and Google Cloud.

ChatGPT

101
ChatGPT is the chatbot interface OpenAI released in late 2022 that kicked off the mainstream generative AI wave. It's powered by the GPT family of models (currently GPT-4 / GPT-4 Turbo / GPT-5). ChatGPT Team and ChatGPT Enterprise add collaboration, admin controls, and data privacy commitments for organizations. Custom GPTs (see below) let users build tailored versions.

Claude

101
Claude is the AI assistant built by Anthropic, the company founded by former OpenAI researchers focused on AI safety. Claude is known for long context windows (200K+ tokens), strong writing quality, and careful reasoning on complex tasks. Claude Projects let teams attach persistent context files; Claude Teams and Enterprise plans add admin controls. Digital Braid defaults to Claude for most production work. See: LLM setup service.

Gemini

101 · also: Google Gemini, Bard (former name)
Gemini is Google DeepMind's family of AI models (Gemini 1.5, Gemini 2.0, etc.) and also the name of the consumer and business AI assistants that use them. Gemini is deeply integrated with Google Workspace (Docs, Sheets, Gmail) and supports multimodal inputs (text, image, audio, video). Gemini Gems are Google's answer to Custom GPTs.

Google AI Overviews

101 · also: AI Overviews, SGE
Google AI Overviews (previously the Search Generative Experience) is the AI-generated summary Google shows at the top of many search results pages. The overview synthesizes content from indexed sources, cites a few of them, and often answers the query directly — meaning users never click to your site. Showing up in AI Overview citations is now a core AEO goal.

Google DeepMind

101
Google DeepMind is the merged AI division formed in 2023 when Google combined Google Brain and DeepMind. It's responsible for the Gemini model family, AlphaFold (protein structure prediction), and most of Google's foundational AI research. Gemini models power Google products (Search, Workspace, Android AI features) and are offered through the Google Cloud Vertex AI platform.

Microsoft Copilot

101 · also: Copilot, GitHub Copilot
Microsoft Copilot is the brand Microsoft uses for AI assistants embedded across its products: Copilot in Windows, Microsoft 365 Copilot (Word, Excel, Outlook, Teams), and GitHub Copilot for coding. Most Copilot features are powered by OpenAI models under the hood, via Microsoft's exclusive Azure partnership with OpenAI. Microsoft 365 Copilot is licensed per-user ($30/mo business pricing).

OpenAI

101
OpenAI is the San Francisco AI lab founded in 2015 that builds the GPT family of language models, ChatGPT, DALL-E (image generation), Sora (video generation), and Whisper (speech). OpenAI has an exclusive cloud partnership with Microsoft and offers its models via the OpenAI API for developers and the Azure OpenAI Service for enterprises.

Perplexity

101
Perplexity is an AI-powered search engine that generates direct answers to questions while citing web sources inline. Unlike ChatGPT's default mode, Perplexity always grounds answers in current web results, making it the go-to tool for research. Perplexity Pro adds access to multiple model backends (GPT-4, Claude, and others). AEO strategies target Perplexity alongside ChatGPT, Gemini, and Google AI Overviews.

AI (Artificial Intelligence)

101 · also: Artificial Intelligence
Artificial Intelligence is the umbrella term for computer systems designed to perform tasks normally associated with human intelligence: recognizing speech, understanding language, making decisions, identifying images. In current usage, 'AI' almost always means generative AI powered by large language models — the ChatGPT-era technology — even though the field has existed since the 1950s.

AI Agent

101 · also: Agentic AI, Autonomous agent
An AI agent is a production software system built around a large language model that can plan, call tools, read state, and take multi-step actions toward a defined goal. Unlike a chatbot (which just responds), an agent executes work — it validates orders, routes tickets, writes drafts, or queries databases — and returns a result or escalation, not just a conversation. See: AI Agents service.

Chatbot

101
A chatbot is any software that simulates conversation with a user, typically through a chat UI. Pre-2022 chatbots were mostly rule-based or scripted (flowcharts of 'if user says X, reply Y'). Modern chatbots are usually LLM-powered, responding with generated text. A chatbot is not the same as an AI agent — chatbots respond; agents execute.

Claude Projects

101
Claude Projects are workspaces inside Claude.ai that bundle a system prompt with persistent knowledge files and a conversation history. Unlike Custom GPTs, Projects are designed around long-context document use — you can upload dozens of files totaling hundreds of thousands of words and Claude can reason across all of them. Available on Claude Pro, Team, and Enterprise plans. See: LLM Setup service.

Custom GPT

101
A Custom GPT is a version of ChatGPT that someone has configured for a specific purpose — a custom system prompt, uploaded knowledge files, optional API integrations (Actions), and a conversational style tuned to the use case. Available on ChatGPT Plus, Team, and Enterprise plans. Useful for: brand-voice content drafting, internal research assistants, workflow-specific support bots. See: LLM Setup service.

Gemini Gems

101
Gemini Gems are custom configurations of the Gemini assistant that users can create with tailored system instructions for specific tasks. Available to Gemini Advanced subscribers and Google Workspace users with the Gemini add-on. Gems integrate with Google Drive, so you can reference documents in your Drive as grounding for a Gem's responses.

Generative AI

101 · also: GenAI
Generative AI refers to AI systems that produce new content rather than analyzing or classifying existing content. ChatGPT generates text, DALL-E generates images, Sora generates video, GitHub Copilot generates code. The term became mainstream in 2022 with the release of ChatGPT and Stable Diffusion. Most business conversations about 'AI' in 2026 are really about generative AI specifically.

GPT

101 · also: Generative Pre-trained Transformer
GPT stands for Generative Pre-trained Transformer — both a specific family of models OpenAI ships (GPT-3, GPT-4, GPT-5) and the broader architecture most modern LLMs use. In casual usage, people often say 'GPT' to mean 'any language model' the way they say 'Kleenex' to mean tissue. Technically, Claude and Gemini are also transformer-based but not 'GPT' models.

Hallucination

101 · also: Confabulation
A hallucination is when a large language model produces output that sounds plausible and confident but is factually incorrect — made-up citations, fabricated quotes, invented statistics, non-existent features. Hallucinations are a core limitation of current LLMs: the model is predicting the most likely next words, not verifying facts. Retrieval-augmented generation and careful context engineering reduce hallucination but don't eliminate it.

LLM (Large Language Model)

101 · also: Large Language Model, Foundation model
A Large Language Model is a type of machine learning model trained on enormous text datasets to predict and generate language. Modern LLMs — Claude, GPT-4, Gemini, Llama — can summarize, classify, translate, write code, and reason through multi-step problems when given appropriate context. They're the core primitive behind AI agents, chatbots, and most generative AI applications.

Machine Learning

101 · also: ML
Machine learning is the branch of AI where computer systems learn patterns from example data rather than being programmed with explicit rules. Classical ML includes techniques like decision trees and linear regression; modern deep learning uses neural networks with many layers. Large language models are a specific application of deep learning applied to language, trained on enormous text datasets.

Multimodal

101
A multimodal model can understand and generate across multiple content types — text, images, audio, video — in a single system. GPT-4 can read images. Gemini can analyze video. Claude can parse PDFs and images. Practical implications: you can upload a screenshot and ask about what's in it, or dictate audio to get a transcript + structured analysis in one pass.

Prompt

101
A prompt is the input you give to an AI model — the message, question, instruction, or document you're asking it to process. A good prompt is specific, provides necessary context, and states the desired output format. Prompts can include examples (few-shot prompting), step-by-step reasoning requests (chain-of-thought), or structured templates.

Prompt Engineering

101
Prompt engineering is the discipline of writing effective prompts — choosing the right instructions, examples, formatting, and framing to get reliable, high-quality output from an LLM. It ranges from simple tactics (be specific, give examples, request a format) to sophisticated patterns (chain-of-thought, few-shot, structured output schemas). Closely related to but narrower than context engineering.
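
These tactics can be sketched as a reusable template. Everything here — the product, the categories, the example ticket — is hypothetical, but the structure (specific role, one example, explicit output format) is the point:

```python
# A minimal prompt-engineering sketch: be specific, give an example,
# and request an output format. All names here are made up.

def build_prompt(ticket_text: str) -> str:
    return "\n".join([
        "You are a support triage assistant for a SaaS product.",
        "Classify the ticket below into exactly one category:",
        "billing, bug, feature_request, or other.",
        "",
        "Example ticket: 'I was charged twice this month.'",
        "Example answer: billing",
        "",
        f"Ticket: {ticket_text!r}",
        "Answer with only the category name.",
    ])

print(build_prompt("The export button crashes the app."))
```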

System Prompt

101 · also: System message, System instruction
A system prompt is a special, persistent instruction that sets an AI model's role, tone, constraints, and behavior for an entire conversation. It's different from a regular message — the system prompt always applies, and the model gives it higher priority. In ChatGPT, Custom GPT instructions are a system prompt. In Claude, Project instructions are a system prompt. In API usage, it's an explicit parameter.
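
In API usage, the sketch below shows where the system prompt lives. The field names follow the shape of Anthropic's Messages API request body; the model name and instructions are placeholders:

```python
import json

# Sketch of an API request body with an explicit system prompt.
# Field names follow Anthropic's Messages API shape; the model name
# and instruction text are placeholders, not real values.
payload = {
    "model": "claude-sonnet-example",
    "max_tokens": 1024,
    # The system prompt persists across the whole conversation...
    "system": "You are a concise technical editor. Answer in plain English.",
    # ...while messages hold the individual turns.
    "messages": [
        {"role": "user", "content": "Define 'token' in one sentence."},
    ],
}
print(json.dumps(payload, indent=2))
```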

Token

101
A token is the unit of text a language model processes — it's approximately 3/4 of a word on average, but varies by language and content. Common short words are one token; long words can be two or three. 'Digital Braid' is 4 tokens. LLM pricing is per-token, context windows are measured in tokens, and rate limits are often in tokens-per-minute. 1,000 tokens ≈ 750 words.
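
The rule of thumb above can be turned into a rough budgeting helper. This is only an estimate — real tokenizers (e.g. OpenAI's tiktoken) give exact counts, and the ratio varies by language and content:

```python
# Rough token estimator using the ~3/4-word-per-token rule of thumb.
# For budgeting only; not a substitute for the model's real tokenizer.

def estimate_tokens(text: str) -> int:
    words = len(text.split())
    return round(words * 4 / 3)

print(estimate_tokens("a " * 750))  # 1000 — matches 1,000 tokens ≈ 750 words
```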

API (Application Programming Interface)

101 · also: Application Programming Interface
An API is the structured interface a software service exposes so other programs can request data or trigger actions. When ChatGPT connects to a third-party tool, it's calling that tool's API. When Salesforce syncs with HubSpot, they're talking over APIs. APIs are the foundation of software integration — without them, you'd be stuck copying data between systems by hand.
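
A minimal sketch of what "calling an API" means in code, assuming a hypothetical REST endpoint — the URL, token, and fields are invented for illustration:

```python
import json
import urllib.request

# Build a structured request to a (made-up) order-management API:
# a documented URL, auth header, JSON body, and HTTP method.
req = urllib.request.Request(
    "https://api.example.com/v1/orders/42",
    data=json.dumps({"status": "shipped"}).encode(),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    method="PATCH",
)
# urllib.request.urlopen(req) would actually send it; omitted here.
print(req.get_method(), req.full_url)
```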

Internal Tool

101 · also: Custom internal application, Internal app
An internal tool is custom software built for the specific workflows, data shapes, and business rules of a single organization — as opposed to SaaS products that try to serve many customers. Internal tools are the right choice when off-the-shelf software forces the team to work around it, or when a critical workflow currently lives in spreadsheets because no product handles it exactly right. See: Custom Tools service.

No-code / Low-code

101 · also: No-code platform, Low-code
No-code and low-code tools let people build software through visual interfaces (drag-and-drop, if-this-then-that, form builders) instead of writing code. Zapier, Make, n8n, Airtable, Bubble, and Webflow are common examples. Great for quick automations and simple apps; hit ceilings on complex logic, custom data models, or performance. Often the first solution companies try before hiring engineers or a custom-tools firm.

Workflow Automation

101
Workflow automation is the design and deployment of software systems that execute multi-step business processes — content production, client onboarding, reporting pipelines, data validation — from start to finish with minimal manual intervention. Modern workflow automation commonly integrates LLMs for the steps that require judgment or language generation, while keeping rule-based logic for deterministic steps. See: Workflow Automation service.

Zapier / Make / n8n

101
Zapier, Make (formerly Integromat), and n8n are the most common no-code automation platforms. They all let you connect SaaS tools and trigger multi-step workflows — 'when X happens in Tool A, do Y in Tool B.' Zapier is simplest and most popular; Make is more powerful for complex logic; n8n is open-source and self-hostable. All three now support AI steps via OpenAI, Anthropic, and other model APIs.

SEO (Search Engine Optimization)

101 · also: Search Engine Optimization
SEO is the practice of improving a website's visibility in organic (non-paid) search engine results — historically focused on Google. Core levers: technical site health (crawlability, page speed), content quality and relevance to target queries, and authority signals (backlinks, brand entity presence). Traditional SEO still drives most organic traffic but is increasingly complemented by AEO as AI search takes share.

Marketing Engineer

101 · also: Growth engineer
A marketing engineer is a hybrid practitioner who writes production-grade software to solve marketing and growth problems — building automation pipelines, custom internal tools, AI agents, content systems, and data infrastructure. Unlike a developer who implements specs, a marketing engineer owns the strategic reasoning for what to build; unlike a strategist, they actually ship the system. See: What is a marketing engineer?

POC / MVP / Production

101 · also: Proof of Concept, Minimum Viable Product
POC (Proof of Concept) is a throwaway demo that shows an AI approach can work. MVP (Minimum Viable Product) is a real but minimal version that real users can use. Production is a version that's reliable enough to trust with day-to-day work. Most enterprise AI projects fail at the POC-to-MVP boundary — the demo impresses, but the leap to real users requires engineering no one planned for. See: Why most enterprise AI projects never ship.

Training Data

101
Training data is the content an AI model was exposed to during training — for modern LLMs, that's typically trillions of words scraped from the web, books, code repositories, and licensed datasets. Training data determines what a model knows and what it can generate. It also raises copyright, licensing, and bias questions — models can reflect biases or copyrighted patterns from their training data.

Full reference

All terms grouped by topic, including intermediate and advanced entries.

Platforms & Companies

Anthropic

101
Anthropic is the AI safety company that builds Claude. Founded in 2021 by siblings Dario and Daniela Amodei along with other former OpenAI researchers, Anthropic emphasizes constitutional AI and safety research. Anthropic introduced the Model Context Protocol (MCP) and has major partnerships with AWS and Google Cloud.

ChatGPT

101
ChatGPT is the chatbot interface OpenAI released in late 2022 that kicked off the mainstream generative AI wave. It's powered by the GPT family of models (currently GPT-4 / GPT-4 Turbo / GPT-5). ChatGPT Team and ChatGPT Enterprise add collaboration, admin controls, and data privacy commitments for organizations. Custom GPTs (see below) let users build tailored versions.

Claude

101
Claude is the AI assistant built by Anthropic, the company founded by former OpenAI researchers focused on AI safety. Claude is known for long context windows (200K+ tokens), strong writing quality, and careful reasoning on complex tasks. Claude Projects let teams attach persistent context files; Claude Teams and Enterprise plans add admin controls. Digital Braid defaults to Claude for most production work. See: LLM setup service.

Gemini

101 · also: Google Gemini, Bard (former name)
Gemini is Google DeepMind's family of AI models (Gemini 1.5, Gemini 2.0, etc.) and also the name of the consumer and business AI assistants that use them. Gemini is deeply integrated with Google Workspace (Docs, Sheets, Gmail) and supports multimodal inputs (text, image, audio, video). Gemini Gems are Google's answer to Custom GPTs.

Google AI Overviews

101 · also: AI Overviews, SGE
Google AI Overviews (previously the Search Generative Experience) is the AI-generated summary Google shows at the top of many search results pages. The overview synthesizes content from indexed sources, cites a few of them, and often answers the query directly — meaning users never click to your site. Showing up in AI Overview citations is now a core AEO goal.

Google DeepMind

101
Google DeepMind is the merged AI division formed in 2023 when Google combined Google Brain and DeepMind. It's responsible for the Gemini model family, AlphaFold (protein structure prediction), and most of Google's foundational AI research. Gemini models power Google products (Search, Workspace, Android AI features) and are offered through the Google Cloud Vertex AI platform.

Microsoft Copilot

101 · also: Copilot, GitHub Copilot
Microsoft Copilot is the brand Microsoft uses for AI assistants embedded across its products: Copilot in Windows, Microsoft 365 Copilot (Word, Excel, Outlook, Teams), and GitHub Copilot for coding. Most Copilot features are powered by OpenAI models under the hood, via Microsoft's exclusive Azure partnership with OpenAI. Microsoft 365 Copilot is licensed per-user ($30/mo business pricing).

OpenAI

101
OpenAI is the San Francisco AI lab founded in 2015 that builds the GPT family of language models, ChatGPT, DALL-E (image generation), Sora (video generation), and Whisper (speech). OpenAI has an exclusive cloud partnership with Microsoft and offers its models via the OpenAI API for developers and the Azure OpenAI Service for enterprises.

Perplexity

101
Perplexity is an AI-powered search engine that generates direct answers to questions while citing web sources inline. Unlike ChatGPT's default mode, Perplexity always grounds answers in current web results, making it the go-to tool for research. Perplexity Pro adds access to multiple model backends (GPT-4, Claude, and others). AEO strategies target Perplexity alongside ChatGPT, Gemini, and Google AI Overviews.

AI Basics

AI (Artificial Intelligence)

101 · also: Artificial Intelligence
Artificial Intelligence is the umbrella term for computer systems designed to perform tasks normally associated with human intelligence: recognizing speech, understanding language, making decisions, identifying images. In current usage, 'AI' almost always means generative AI powered by large language models — the ChatGPT-era technology — even though the field has existed since the 1950s.

AI Agent

101 · also: Agentic AI, Autonomous agent
An AI agent is a production software system built around a large language model that can plan, call tools, read state, and take multi-step actions toward a defined goal. Unlike a chatbot (which just responds), an agent executes work — it validates orders, routes tickets, writes drafts, or queries databases — and returns a result or escalation, not just a conversation. See: AI Agents service.

Chatbot

101
A chatbot is any software that simulates conversation with a user, typically through a chat UI. Pre-2022 chatbots were mostly rule-based or scripted (flowcharts of 'if user says X, reply Y'). Modern chatbots are usually LLM-powered, responding with generated text. A chatbot is not the same as an AI agent — chatbots respond; agents execute.

Claude Projects

101
Claude Projects are workspaces inside Claude.ai that bundle a system prompt with persistent knowledge files and a conversation history. Unlike Custom GPTs, Projects are designed around long-context document use — you can upload dozens of files totaling hundreds of thousands of words and Claude can reason across all of them. Available on Claude Pro, Team, and Enterprise plans. See: LLM Setup service.

Custom GPT

101
A Custom GPT is a version of ChatGPT that someone has configured for a specific purpose — a custom system prompt, uploaded knowledge files, optional API integrations (Actions), and a conversational style tuned to the use case. Available on ChatGPT Plus, Team, and Enterprise plans. Useful for: brand-voice content drafting, internal research assistants, workflow-specific support bots. See: LLM Setup service.

Fine-tuning

Advanced
Fine-tuning is the process of taking a pre-trained LLM and continuing its training on a smaller, task-specific dataset to specialize its behavior. It's used when prompt engineering alone can't reliably produce the desired output — typically for highly structured tasks, specialized classification, or strong brand-voice requirements. Most production AI work in 2026 succeeds with context engineering and skips fine-tuning.

Gemini Gems

101
Gemini Gems are custom configurations of the Gemini assistant that users can create with tailored system instructions for specific tasks. Available to Gemini Advanced subscribers and Google Workspace users with the Gemini add-on. Gems integrate with Google Drive, so you can reference documents in your Drive as grounding for a Gem's responses.

Generative AI

101 · also: GenAI
Generative AI refers to AI systems that produce new content rather than analyzing or classifying existing content. ChatGPT generates text, DALL-E generates images, Sora generates video, GitHub Copilot generates code. The term became mainstream in 2022 with the release of ChatGPT and Stable Diffusion. Most business conversations about 'AI' in 2026 are really about generative AI specifically.

GPT

101 · also: Generative Pre-trained Transformer
GPT stands for Generative Pre-trained Transformer — both a specific family of models OpenAI ships (GPT-3, GPT-4, GPT-5) and the broader architecture most modern LLMs use. In casual usage, people often say 'GPT' to mean 'any language model' the way they say 'Kleenex' to mean tissue. Technically, Claude and Gemini are also transformer-based but not 'GPT' models.

Grounding

Intermediate
Grounding is the practice of tying a language model's output to specific source material — retrieved documents, structured data, API responses — rather than relying on the model's training knowledge alone. Grounded outputs can cite their sources, are much less likely to hallucinate, and stay current when underlying data changes. RAG is the most common grounding pattern.

Hallucination

101 · also: Confabulation
A hallucination is when a large language model produces output that sounds plausible and confident but is factually incorrect — made-up citations, fabricated quotes, invented statistics, non-existent features. Hallucinations are a core limitation of current LLMs: the model is predicting the most likely next words, not verifying facts. Retrieval-augmented generation and careful context engineering reduce hallucination but don't eliminate it.

Inference

Intermediate
Inference is the process of running a trained AI model to generate output for a given input. Training a model happens once (over months, on enormous compute); inference happens every time someone sends a query. Most AI costs in production are inference costs. Faster inference is a major area of model development — smaller models, quantization, and specialized hardware all optimize inference.

LLM (Large Language Model)

101 · also: Large Language Model, Foundation model
A Large Language Model is a type of machine learning model trained on enormous text datasets to predict and generate language. Modern LLMs — Claude, GPT-4, Gemini, Llama — can summarize, classify, translate, write code, and reason through multi-step problems when given appropriate context. They're the core primitive behind AI agents, chatbots, and most generative AI applications.

Machine Learning

101 · also: ML
Machine learning is the branch of AI where computer systems learn patterns from example data rather than being programmed with explicit rules. Classical ML includes techniques like decision trees and linear regression; modern deep learning uses neural networks with many layers. Large language models are a specific application of deep learning applied to language, trained on enormous text datasets.

MCP (Model Context Protocol)

Advanced · also: Model Context Protocol
The Model Context Protocol is an open standard introduced by Anthropic that defines how LLM applications connect to external tools, data sources, and APIs. MCP servers expose structured capabilities — read a database, call an API, run a search — that Claude and other MCP-compatible clients can invoke at runtime. It replaces custom per-integration glue with a uniform protocol.
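
For a sense of what the uniform protocol looks like: MCP messages are JSON-RPC 2.0, and a tool invocation is roughly shaped like the sketch below. The tool name and arguments are hypothetical; see the MCP specification for the authoritative message formats:

```python
import json

# Illustrative shape of an MCP tool invocation (JSON-RPC 2.0).
# 'search_orders' and its arguments are made-up examples of a
# capability an MCP server might expose.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_orders",
        "arguments": {"customer_id": "C-42"},
    },
}
print(json.dumps(request, indent=2))
```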

Multimodal

101
A multimodal model can understand and generate across multiple content types — text, images, audio, video — in a single system. GPT-4 can read images. Gemini can analyze video. Claude can parse PDFs and images. Practical implications: you can upload a screenshot and ask about what's in it, or dictate audio to get a transcript + structured analysis in one pass.

Neural Network

Intermediate
A neural network is a machine learning architecture loosely inspired by biological brains — layers of interconnected 'neurons' (mathematical units) that transform input data through learned weights. Deep neural networks have many layers and are the foundation of modern AI including LLMs, image generators, and speech models. Training adjusts the weights; inference applies them.
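
A single 'neuron' can be sketched in a few lines — a weighted sum plus bias, passed through a nonlinearity. The weights here are arbitrary; real networks stack millions of these units in layers:

```python
import math

# One neuron: weighted sum of inputs plus a bias, squashed by a
# sigmoid activation. Training adjusts w and b; inference applies them.
def neuron(x, w, b):
    z = sum(xi * wi for xi, wi in zip(x, w)) + b
    return 1 / (1 + math.exp(-z))  # sigmoid maps z into (0, 1)

print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))
```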

RAG (Retrieval-Augmented Generation)

Intermediate · also: Retrieval Augmented Generation
Retrieval-Augmented Generation is an architecture that pairs an LLM with a searchable document store. When a user asks a question, the system retrieves the most relevant content from the store, injects it into the prompt, and the LLM generates an answer grounded in that content. RAG is how most production AI tools deliver accurate, source-grounded answers without retraining the model.
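
The retrieve-then-inject loop can be sketched with toy keyword overlap standing in for the vector-embedding search production systems actually use. The documents are invented:

```python
# Toy RAG: score documents by word overlap with the question, then
# inject the best match into the prompt. Real systems use embeddings;
# these documents are made up for illustration.

DOCS = [
    "Refunds are processed within 5 business days of approval.",
    "Our office is closed on public holidays.",
    "API keys can be rotated from the account settings page.",
]

def retrieve(question: str) -> str:
    q_words = set(question.lower().split())
    return max(DOCS, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str) -> str:
    context = retrieve(question)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How long do refunds take?"))
```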

Prompting & Context

Chain of Thought

Intermediate · also: CoT
Chain of thought prompting is a technique where you explicitly ask the model to work through its reasoning step-by-step before producing a final answer. The simplest version: add 'Let's think step by step' to your prompt. More advanced versions use structured formats. Chain of thought significantly improves accuracy on math, logic, and multi-step tasks — often by double-digit percentages.
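
The simplest version described above, as a sketch — the answer-line convention is just one common way to make the final answer easy to extract:

```python
# Minimal chain-of-thought wrapper: ask for step-by-step reasoning,
# then a clearly marked final answer.
def with_cot(question: str) -> str:
    return (
        f"{question}\n"
        "Let's think step by step, then give the final answer "
        "on its own line prefixed with 'Answer:'."
    )

print(with_cot("A train leaves at 9:40 and arrives at 11:05. How long is the trip?"))
```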

Context Engineering

Intermediate · also: Context window management
Context engineering is the discipline of deliberately structuring everything an LLM receives as input — the system prompt, relevant documents, examples, tool definitions, and conversation state — so the model produces accurate, on-brand, and task-specific output. It goes beyond one-off prompt tweaking and treats context as infrastructure to be designed, versioned, and maintained. See: LLM Setup service.

Context Window

Intermediate
The context window is the maximum amount of text a large language model can process in one request — including your prompt, any uploaded documents, conversation history, and the model's generated response. Measured in tokens. Claude 3.5 Sonnet: 200K tokens (~150K words). GPT-4 Turbo: 128K tokens. Gemini 1.5 Pro: 1M+ tokens. Larger windows enable more sophisticated grounding but slow down inference.
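
A common practical consequence is trimming conversation history to fit the window. This sketch uses a crude characters-per-token estimate rather than a real tokenizer:

```python
# Keep the most recent turns that fit a token budget, using a rough
# ~4-characters-per-token estimate. Production code would count with
# the model's actual tokenizer.

def trim_history(turns: list[str], budget_tokens: int) -> list[str]:
    kept, used = [], 0
    for turn in reversed(turns):      # walk newest-first
        cost = len(turn) // 4 + 1     # crude token estimate
        if used + cost > budget_tokens:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))       # restore chronological order

print(trim_history(["old " * 100, "recent question?"], budget_tokens=50))
```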

Few-shot Prompting

Intermediate · also: In-context learning
Few-shot prompting is a technique where you include several examples of the desired input-output pattern in your prompt before asking the model to handle a new case. For example: two examples of customer emails + ideal responses, then the new email you want triaged. The model uses the examples to infer the pattern. Few-shot reliably outperforms zero-shot (no examples) for structured or stylistic tasks.
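
A sketch of the pattern, with invented emails and labels — two labeled examples, then the new case for the model to complete:

```python
# Few-shot prompt: labeled examples first, then the new input with an
# empty label for the model to fill in. All text here is made up.
EXAMPLES = [
    ("I was billed twice for March.", "billing"),
    ("Can you add dark mode?", "feature_request"),
]

def few_shot_prompt(new_email: str) -> str:
    shots = "\n".join(f"Email: {e}\nLabel: {l}" for e, l in EXAMPLES)
    return f"{shots}\nEmail: {new_email}\nLabel:"

print(few_shot_prompt("The app crashes when I export a PDF."))
```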

Prompt

101
A prompt is the input you give to an AI model — the message, question, instruction, or document you're asking it to process. A good prompt is specific, provides necessary context, and states the desired output format. Prompts can include examples (few-shot prompting), step-by-step reasoning requests (chain-of-thought), or structured templates.

Prompt Engineering

101
Prompt engineering is the discipline of writing effective prompts — choosing the right instructions, examples, formatting, and framing to get reliable, high-quality output from an LLM. It ranges from simple tactics (be specific, give examples, request a format) to sophisticated patterns (chain-of-thought, few-shot, structured output schemas). Closely related to but narrower than context engineering.

System Prompt

101 · also: System message, System instruction
A system prompt is a special, persistent instruction that sets an AI model's role, tone, constraints, and behavior for an entire conversation. It's different from a regular message — the system prompt always applies, and the model gives it higher priority. In ChatGPT, Custom GPT instructions are a system prompt. In Claude, Project instructions are a system prompt. In API usage, it's an explicit parameter.

Temperature

Intermediate
Temperature is a parameter (typically 0 to 2) that controls how random an LLM's output is. At 0, the model picks the most likely next word every time — very deterministic, good for extraction and structured output. At 1, the default, output is varied and feels natural. Above 1.2, output gets creative and sometimes incoherent. Most production code uses 0 to 0.3 for reliability.
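
Under the hood, temperature divides the model's raw scores (logits) before they are turned into probabilities. A toy sketch with made-up logits — note that T = 0 is handled as a special case in practice (just pick the top token), since dividing by zero is undefined:

```python
import math

# Temperature scaling: divide logits by T, then softmax. Low T
# concentrates probability on the top token; high T flattens it.

def softmax_with_temperature(logits, t):
    scaled = [x / t for x in logits]
    m = max(scaled)                          # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                     # toy next-token scores
print(softmax_with_temperature(logits, 0.2)) # nearly all mass on token 0
print(softmax_with_temperature(logits, 1.5)) # much flatter distribution
```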

Token

101
A token is the unit of text a language model processes — it's approximately 3/4 of a word on average, but varies by language and content. Common short words are one token; long words can be two or three. 'Digital Braid' is 4 tokens. LLM pricing is per-token, context windows are measured in tokens, and rate limits are often in tokens-per-minute. 1,000 tokens ≈ 750 words.

Automation & Integration

API (Application Programming Interface)

101 · also: Application Programming Interface
An API is the structured interface a software service exposes so other programs can request data or trigger actions. When ChatGPT connects to a third-party tool, it's calling that tool's API. When Salesforce syncs with HubSpot, they're talking over APIs. APIs are the foundation of software integration — without them, you'd be stuck copying data between systems by hand.

Internal Tool

101 · also: Custom internal application, Internal app
An internal tool is custom software built for the specific workflows, data shapes, and business rules of a single organization — as opposed to SaaS products that try to serve many customers. Internal tools are the right choice when off-the-shelf software forces the team to work around it, or when a critical workflow currently lives in spreadsheets because no product handles it exactly right. See: Custom Tools service.

No-code / Low-code

101 · also: No-code platform, Low-code
No-code and low-code tools let people build software through visual interfaces (drag-and-drop, if-this-then-that, form builders) instead of writing code. Zapier, Make, n8n, Airtable, Bubble, and Webflow are common examples. Great for quick automations and simple apps; hit ceilings on complex logic, custom data models, or performance. Often the first solution companies try before hiring engineers or a custom-tools firm.

RPA (Robotic Process Automation)

Intermediate · also: Robotic Process Automation
Robotic Process Automation uses software bots to replicate human UI interactions — clicking buttons, filling forms, moving data between applications — without requiring API integrations. RPA works best for high-volume, deterministic tasks inside legacy systems. For workflows requiring judgment, language, or unstructured data, AI agents have largely replaced RPA as the preferred approach.

Webhook

Intermediate
A webhook is a URL that a service calls automatically when something happens — the inverse of an API call, where your code asks the service for data. When a Stripe payment succeeds, Stripe can hit your webhook URL to notify you. When a form is submitted, the form tool can hit your webhook. Webhooks are the event-driven plumbing that lets workflow automations respond to things in real time.
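Because a webhook URL is publicly reachable, receivers typically verify a signature before trusting the payload. This is a generic sketch using Python's standard library: the provider computes an HMAC-SHA256 of the raw body with a shared secret and sends it in a header; the receiver recomputes and compares. Header names and exact signing schemes vary by provider — this is not any specific provider's format.

```python
import hmac
import hashlib

def verify_webhook(payload: bytes, signature_header: str, secret: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw payload and compare it,
    in constant time, to the signature the sender included."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

`hmac.compare_digest` matters here: a plain `==` comparison can leak timing information an attacker could use to forge signatures byte by byte.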

Workflow Automation

101
Workflow automation is the design and deployment of software systems that execute multi-step business processes — content production, client onboarding, reporting pipelines, data validation — from start to finish with minimal manual intervention. Modern workflow automation commonly integrates LLMs for the steps that require judgment or language generation, while keeping rule-based logic for deterministic steps.Workflow Automation service
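The split between deterministic steps and judgment steps can be sketched as a small pipeline. This is an illustrative shape, not a real product's code: validation is rule-based, while the summarization step is where a production system would call an LLM API (a stub stands in here so the pipeline runs on its own).

```python
def validate(record: dict) -> dict:
    """Deterministic step: rule-based checks, no AI needed."""
    if not record.get("email"):
        raise ValueError("missing email")
    return record

def summarize(record: dict, llm=None) -> dict:
    """Judgment step: in production this would call an LLM API.
    The default stub just echoes the prompt so the sketch runs."""
    llm = llm or (lambda prompt: prompt)
    record["summary"] = llm(f"Summarize: {record['notes']}")
    return record

def run_pipeline(record: dict) -> dict:
    """Run each step in order, passing the record along."""
    for step in (validate, summarize):
        record = step(record)
    return record
```

Keeping the rule-based steps outside the LLM call is deliberate: deterministic logic is cheaper, faster, and testable, so the model is reserved for the steps that genuinely need language or judgment.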

Zapier / Make / n8n

101
Zapier, Make (formerly Integromat), and n8n are the most common no-code automation platforms. They all let you connect SaaS tools and trigger multi-step workflows — 'when X happens in Tool A, do Y in Tool B.' Zapier is simplest and most popular; Make is more powerful for complex logic; n8n is open-source and self-hostable. All three now support AI steps via OpenAI, Anthropic, and other model APIs.

Engineering Practice

Evals (AI Evaluations)

Advancedalso: AI evaluation, LLM evals
Evals are structured test suites for AI systems — sets of inputs paired with expected properties of outputs, run automatically to measure quality, accuracy, safety, and consistency. Think unit tests, but fuzzier. Evals catch regressions when models are swapped or prompts change. The best AI engineering teams invest as much in evals as in prompts themselves, because evals are what tell you the system still works.
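The "unit tests, but fuzzier" idea fits in a few lines. This is a minimal harness sketch (the names are illustrative): each case pairs an input with a property check rather than an exact expected string, because LLM outputs vary in wording even when they're correct.

```python
def run_evals(system, cases):
    """Run each input through the system, check the expected
    property, and return (pass_rate, failures)."""
    failures = []
    for case in cases:
        output = system(case["input"])
        if not case["check"](output):
            failures.append((case["input"], output))
    passed = len(cases) - len(failures)
    return passed / len(cases), failures

cases = [
    {"input": "2+2", "check": lambda out: "4" in out},
    {"input": "capital of France", "check": lambda out: "Paris" in out},
]
```

Run the same cases before and after swapping a model or editing a prompt, and a drop in pass rate flags the regression before users see it.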

Marketing Engineer

101also: Growth engineer
A marketing engineer is a hybrid practitioner who writes production-grade software to solve marketing and growth problems — building automation pipelines, custom internal tools, AI agents, content systems, and data infrastructure. Unlike a developer who implements specs, a marketing engineer owns the strategic reasoning for what to build; unlike a strategist, they actually ship the system.What is a marketing engineer?

Observability

Advanced
Observability is the discipline of instrumenting software — traces, structured logs, metrics, prompt/response capture — so engineers can diagnose what the system actually did, not just whether it returned an error. For AI agents and LLM pipelines, observability is especially critical: without it, a subtle prompt regression can silently degrade output quality for weeks before anyone notices.
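The prompt/response capture described above can be sketched as a decorator that wraps any LLM-calling function. This is a minimal stand-in for what real tracing tools provide — the record fields are illustrative, not a standard schema.

```python
import functools
import time
import uuid

def traced(log: list):
    """Wrap a function so every prompt/response pair is captured
    as a structured record: trace id, inputs, outputs, latency."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(prompt, **kwargs):
            start = time.monotonic()
            response = fn(prompt, **kwargs)
            log.append({
                "trace_id": str(uuid.uuid4()),
                "fn": fn.__name__,
                "prompt": prompt,
                "response": response,
                "latency_s": round(time.monotonic() - start, 4),
            })
            return response
        return wrapper
    return decorator
```

With records like these persisted, you can diff this week's outputs against last week's for the same prompts — which is how a silent quality regression gets caught.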

POC / MVP / Production

101also: Proof of Concept, Minimum Viable Product
POC (Proof of Concept) is a throwaway demo that shows an AI approach can work. MVP (Minimum Viable Product) is a real but minimal version that real users can use. Production is a version that's reliable enough to trust with day-to-day work. Most enterprise AI projects fail at the POC-to-MVP boundary — the demo impresses, but the leap to real users requires engineering no one planned for.Why most enterprise AI projects never ship

Data & Infrastructure

Embedding

Advancedalso: Text embedding, Vector embedding
An embedding is a numeric vector — typically 384 to 3,072 dimensions — that represents text, images, or other content in a way that encodes semantic meaning. Two pieces of content with similar meaning produce embeddings that are close together in vector space. Embeddings are the foundation of semantic search, retrieval-augmented generation, content clustering, and recommendation systems.
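"Close together in vector space" is usually measured with cosine similarity. A minimal standard-library sketch (real systems use optimized libraries like NumPy, and real embeddings come from an embedding model, not hand-written vectors):

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: 1.0 = same direction
    (similar meaning), 0.0 = unrelated, -1.0 = opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

In practice, "semantically similar" content — say, two support tickets about the same billing bug — scores high even when the two texts share few exact words, which is what makes embeddings more useful than keyword matching for search and retrieval.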

Training Data

101
Training data is the content an AI model was exposed to during training — for modern LLMs, that's typically trillions of words scraped from the web, books, code repositories, and licensed datasets. Training data determines what a model knows and what it can generate. It also raises copyright, licensing, and bias questions — models can reflect biases or copyrighted patterns from their training data.

Vector Database

Advancedalso: Vector DB, Vector store
A vector database stores high-dimensional numeric representations of content — embeddings — and supports fast similarity search across them. It's the storage layer behind most RAG applications: documents are converted to vectors at ingest time, queries are converted at retrieval time, and the database returns the most semantically similar items. Common implementations include Pinecone, Weaviate, and Postgres with pgvector.
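At small scale, the core of what a vector database does fits in a brute-force sketch: store (id, embedding) pairs and return the most similar to a query. This toy class is illustrative only — real vector databases add approximate-nearest-neighbor indexes (e.g. HNSW) so search stays fast across millions of vectors, plus filtering, persistence, and metadata.

```python
import math

class TinyVectorStore:
    """In-memory, brute-force stand-in for a vector database."""

    def __init__(self):
        self.items = []  # list of (doc_id, vector) pairs

    def add(self, doc_id, vector):
        self.items.append((doc_id, vector))

    def search(self, query, k=3):
        """Return the ids of the k most cosine-similar vectors."""
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb)
        scored = [(cos(query, v), doc_id) for doc_id, v in self.items]
        return [doc_id for _, doc_id in sorted(scored, reverse=True)[:k]]
```

In a RAG pipeline, `add` corresponds to ingest time (embed each document chunk once) and `search` to query time (embed the user's question, retrieve the nearest chunks, and hand them to the model as context).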

Term missing that should be here? Email ryan@digitalbraid.com. Or jump over to the insights for deeper dives.