Technology · 10 min read

Imagine if your Teams or Slack messages automatically turned into secure context for your AI agents — PromptQL built it | VentureBeat

Capturing tribal knowledge organically and creating a living metadata store that informs every AI interaction with company-specific reasoning.

Technology · Innovation · Best Practices · Guide · Tutorial

Overview

Credit: VentureBeat, made with Google Gemini 3.1 Pro Image

Details

For the modern enterprise, the digital workspace risks descending into "coordination theater," in which teams spend more time discussing work than executing it.

While traditional tools like Slack or Teams excel at rapid communication, they have structurally failed to serve as a reliable foundation for AI agents. The gap is widely felt: in February 2026, a Hacker News thread calling on OpenAI to build its own version of Slack to empower AI agents went viral, amassing 327 comments.

That's because agents often lack the real-time context and secure data access required to be truly useful, resulting in "hallucinations" or repeated re-explanation of codebase conventions.

PromptQL, a spin-off from the GraphQL unicorn Hasura, is addressing this by pivoting from an AI data tool into a comprehensive, AI-native workspace designed to turn casual team interactions into a persistent, secure memory for agentic workflows. Rather than letting these conversations fall by the wayside, or forcing users and agents to hunt them down again later, PromptQL distills and stores them as actionable, proprietary data in an organized format: an internal wiki the company can rely on going forward, approved and edited manually as needed.

Imagine two colleagues messaging about a bug that needs to be fixed. Instead of someone manually assigning it to an engineer or agent, the messaging platform tags it, assigns it, and documents it all in the wiki with one click. Now do this for every issue or topic of discussion that takes place in your enterprise, and you'll have an idea of what PromptQL is attempting. The idea is simple but powerful: turn the conversation that necessarily precedes work into an actual assignment, started automatically by your own messaging system.

“We don’t have conversations about work anymore,” CEO Tanmai Gopal said in a recent video call interview with VentureBeat. “You actually have conversations that do the work.”

Originally positioned as an AI data analyst, the company is pivoting into a full-scale AI-native workspace.

It isn't just "Slack with a chatbot"; it is a fundamental re-architecting of how teams interact with their data, their tools, and each other.

“PromptQL is this workhorse in the background, this 24/7 intern that’s continuously cranking out the actual work—looking at code, confirming hypotheses, going to multiple places, actually doing the work,” Gopal said.

Technology: messages that automatically turn into a shared, continuously updated context engine

The technical soul of PromptQL is its Shared Wiki. Traditional LLMs suffer from a "memory" problem: they forget previous interactions or hallucinate based on outdated training data.

PromptQL solves this by capturing "shared context" as teams work. When an engineer fixes a bug or a marketer defines a "recycled lead," they aren't just typing into a void. They are teaching a living, internal Wikipedia. This wiki doesn't require "documentation sprints" or manual YAML file updates; it accumulates context organically.

“Throughout every single conversation, you are teaching PromptQL, and that is going into this wiki that is being developed over time,” Gopal said. “This is our entire company’s knowledge gradually coming together.”

Interconnectivity: Much like cells in a Petri dish, small "islands" of knowledge—say, a Salesforce integration—eventually bridge to other islands, like product usage data in Snowflake.

Human-in-the-Loop: To prevent the AI from learning "junk" (like a reminder about a doctor's appointment from 2024), humans must explicitly "Add to Wiki" to canonize a fact.
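The approval gate can be made concrete with a minimal sketch. The class and method names here are illustrative, assuming only what the article states: the system may surface candidate facts, but only an explicit human "Add to Wiki" action canonizes them:

```python
class SharedWiki:
    """Sketch of the human-in-the-loop gate: nothing becomes canon
    without an explicit 'Add to Wiki' approval."""

    def __init__(self):
        self.pending = []  # candidate facts surfaced from conversations
        self.canon = []    # facts a human explicitly approved

    def propose(self, fact: str):
        # The system can surface candidates, but never canonizes them itself.
        self.pending.append(fact)

    def add_to_wiki(self, fact: str):
        # Human action: canonize one candidate; junk simply stays uncanonized.
        if fact in self.pending:
            self.pending.remove(fact)
            self.canon.append(fact)
```

The design choice is that the default is exclusion: a fact that no human approves never enters the agent's memory, which is what keeps the wiki free of junk.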

The Virtual Data Layer: Unlike traditional platforms that require data replication, PromptQL uses a virtual SQL layer. It queries your data in place across databases (Snowflake, ClickHouse, Postgres) and SaaS tools (Stripe, Zendesk, HubSpot), ensuring that nothing is ever extracted or cached.
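The federation idea can be sketched in miniature. This is not PromptQL's implementation; it is a toy illustration of the principle that each source executes its own query "in place" and only result rows flow back to be joined in memory:

```python
def virtual_query(fetchers, key):
    """Toy federation sketch: each fetcher runs inside its own source;
    the layer joins the returned rows in memory on a shared key.
    No source data is replicated, extracted, or cached."""
    rows = None
    for fetch in fetchers:
        part = fetch()  # executes "in place" at that source
        if rows is None:
            rows = part
        else:
            index = {r[key]: r for r in part}
            rows = [{**r, **index[r[key]]} for r in rows if r[key] in index]
    return rows
```

In practice the per-source work would be pushed down as SQL or API calls rather than Python lambdas, but the flow is the same: query in place, join at the layer.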

PromptQL is designed to be a highly integrable orchestration layer that supports both leading AI model providers and a vast ecosystem of existing enterprise tools.

AI Model Support: The platform allows users to delegate tasks to specific coding agents such as Claude Code and Cursor, or use custom agents built for specific internal needs.

Workflow Compatibility: The system is built to inherit context from existing team tools, enabling AI agents to understand codebase conventions or deployment patterns from your existing infrastructure without manual re-explanation.

The PromptQL interface looks familiar—threads, channels, and mentions—but the functionality is transformative. In a demonstration, an engineer identifies a failing checkout in an #eng-bugs channel.

Instead of tagging a human SRE, they delegate to Claude Code via PromptQL. The agent doesn't just look at the code; it inherits the team's shared context.

It knows, for instance, that "EU payments switched to Adyen on Jan 15" because that fact was added to the wiki weeks prior.
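How a stored fact like the Adyen switch reaches the agent can be illustrated with a naive retrieval sketch. The ranking-by-keyword-overlap heuristic is an assumption for the example; a production system would use embeddings or a structured lookup:

```python
def build_agent_context(wiki_facts, task, limit=3):
    """Naive retrieval sketch: rank canonized wiki facts by keyword
    overlap with the task, and prepend the best matches to the prompt."""
    words = set(task.lower().split())
    ranked = sorted(wiki_facts,
                    key=lambda fact: -len(words & set(fact.lower().split())))
    context = "\n".join(ranked[:limit])
    return f"Team context:\n{context}\n\nTask: {task}"
```

The point is architectural: the agent starts from team-specific facts instead of generic training data, which is what lets it diagnose the currency mismatch instead of hallucinating a cause.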

Within minutes, the AI identifies a currency mismatch, pushes a fix, opens a PR, and updates the wiki for future reference. This "multiplayer" AI approach is what sets the platform apart.

It allows a non-technical manager to ask, "Which accounts have growing Stripe billing but flat Mixpanel usage?" and receive a joined table of data pulled from two disparate sources instantly. The user can then schedule a recurring Slack DM of those results with a single follow-up command.
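The "growing billing, flat usage" question reduces to a join-and-filter across the two sources. This sketch invents the field names (`billing_growth`, `usage_growth`) purely for illustration of what the platform computes on the user's behalf:

```python
def flagged_accounts(stripe_rows, mixpanel_rows):
    """Sketch of the cross-source question: accounts with growing Stripe
    billing but flat or declining Mixpanel usage. Field names are invented."""
    usage = {r["account"]: r for r in mixpanel_rows}
    flagged = []
    for r in stripe_rows:
        u = usage.get(r["account"])
        if u and r["billing_growth"] > 0 and u["usage_growth"] <= 0:
            flagged.append(r["account"])
    return flagged
```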

Also, users don't even need to think about the integrity or cleanliness of their data — PromptQL handles it for them: “Connect all data in whatever state of shittiness it is, and let shared context build up on the fly as you use it,” Gopal said.

For Fortune 500 companies like McDonald's and Cisco, "just connect your data" is a terrifying sentence. PromptQL addresses this with fine-grained access control.

The system enforces attribute-based policies at the infrastructure level. If a Regional Ops Manager asks for vendor rates across all regions, the AI will redact columns or rows they aren't authorized to see, even if the LLM "knows" the answer. Furthermore, any high-stakes action—like updating 38 payment statuses in NetSuite—requires a human "Approve/Deny" sign-off before execution.
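The attribute-based enforcement can be sketched as a filter that sits below the model. The policy table, roles, and column names here are invented for illustration; the key property is that out-of-scope rows and columns are stripped at the data layer, regardless of what the LLM could generate:

```python
def enforce_policy(rows, user):
    """Sketch of attribute-based enforcement below the model: rows outside
    the user's region are dropped and unauthorized columns are redacted.
    The policy shape is invented for this example."""
    allowed_cols = {"ops_manager": {"vendor", "region"},
                    "finance": {"vendor", "region", "rate"}}[user["role"]]
    visible = []
    for row in rows:
        if row["region"] != user["region"]:
            continue  # row-level filter: out-of-region rows never leave the layer
        visible.append({col: (val if col in allowed_cols else "REDACTED")
                        for col, val in row.items()})
    return visible
```

Enforcing this at the infrastructure level, rather than asking the model to self-censor, is what makes the guarantee hold even when the model "knows" the hidden values.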

In a departure from the "per-seat" SaaS status quo, PromptQL is entirely consumption-based.

Pricing: The company uses "Operational Language Units" (OLUs).

Philosophy: Gopal argues that charging per seat penalizes companies for onboarding their whole team. By charging for the value created (the OLU), PromptQL encourages users to connect "everyone and everything."

Enterprise Storage: While smaller teams use dedicated accounts, enterprise customers get a dedicated VPC. Any data the AI "saves" (like a custom to-do list) is stored in the customer's own S3 bucket using the Iceberg format, ensuring total data sovereignty.
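The sovereignty model amounts to a storage adapter whose destination the customer owns. In this sketch a dict stands in for the S3 bucket, and the key layout is invented; a real deployment would write Iceberg tables (for example via a library such as pyiceberg) rather than JSON blobs:

```python
class CustomerBucketStore:
    """Sovereignty sketch: everything the AI persists is written under a
    path the customer controls. A dict stands in for the S3 bucket here."""

    def __init__(self, bucket, prefix="promptql/"):
        self.bucket = bucket
        self.prefix = prefix
        self.objects = {}  # local stand-in for object storage

    def save(self, table, rows):
        key = f"{self.prefix}{table}/data.json"  # illustrative layout, not Iceberg
        self.objects[key] = rows
        return key
```

Because the bucket (and in the Iceberg case, the table metadata) lives in the customer's account, revoking the vendor's credentials revokes its access to everything the AI ever saved.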

"Philosophically, we want you to connect everyone and everything [to PromptQL], so we don’t penalize that," Gopal said. "We just price based on consumption.”

So, is PromptQL a Teams or Slack killer? According to Gopal, the answer is yes: “That is what has happened for us. We’ve shut down our internal Slack for internal comms entirely,” he said.

The launch comes at a pivot point for the industry. Companies are realizing that "chatting with a PDF" isn't enough. They need AI that can act, but they can't afford the security risks of "unsupervised" agents.

By building a workspace that prioritizes shared context and human-in-the-loop verification, PromptQL is offering a middle ground: an AI that learns like a teammate and executes like an intern, all while staying within the guardrails of enterprise security.

For enterprises focused on making AI work at scale, PromptQL addresses the critical "how" of implementation by providing the orchestration and operational layer needed to deploy agentic systems.

By replacing the "coordination theater" of traditional chat tools with a workspace where AI agents have the same permissions and context as human teammates, it enables seamless multi-agent coordination and task-routing. This allows decision-makers to move beyond simple model selection to a reality where agents—such as Claude Code—use shared team context to execute complex workflows, like fixing production bugs or updating CRM records, directly within active threads.

From a data infrastructure perspective, the platform simplifies the management of real-time pipelines and RAG-ready architectures by utilizing a virtual SQL layer that queries data "in place". This eliminates the need for expensive, time-consuming data preparation and replication sprints across hundreds of thousands of tables in databases like Snowflake or Postgres.

Furthermore, the system’s "Shared Wiki" serves as a superior alternative to standard vector databases or prompt-based memory, capturing tribal knowledge organically and creating a living metadata store that informs every AI interaction with company-specific reasoning.

Finally, PromptQL addresses the security governance required for modern AI stacks by enforcing fine-grained, attribute-based access control and role-based permissions.

Through human-in-the-loop verification, it ensures that high-stakes actions and data mutations are held for explicit approval, protecting against model misuse and unauthorized data leakage.

While it does not assist with physical infrastructure tasks such as GPU cluster optimization or hardware procurement, it provides the necessary software guardrails and auditability to ensure that agentic workflows remain compliant with enterprise standards like SOC 2, HIPAA, and GDPR.


Key Takeaways

  • For the modern enterprise, the digital workspace risks descending into "coordination theater," in which teams spend more time discussing work than executing it.
  • Traditional tools like Slack or Teams excel at rapid communication but have structurally failed as a foundation for AI agents; a February 2026 Hacker News thread calling on OpenAI to build its own version of Slack for AI agents amassed 327 comments.
  • Agents often lack the real-time context and secure data access required to be truly useful, resulting in hallucinations or repeated re-explanation of codebase conventions.
