For AI Agents
Agent Patterns
Three short recipes for common agent workflows with OpenBio. Each pattern is self-contained and ready to adapt.
1
Retrieve evidence before answering a clinical question
Always ground the agent in structured evidence before generating a response. Grounding the model this way reduces hallucination risk and ensures the answer cites real, provenance-tracked data.
```typescript
import Anthropic from '@anthropic-ai/sdk';
import { OpenBio } from '@openbio/sdk';

const anthropic = new Anthropic();
const openbio = new OpenBio();

async function answerClinicalQuestion(subjectId: string, question: string) {
  // Step 1: Retrieve structured evidence context
  const context = await openbio.agents.evidenceContext({
    subjectId,
    format: 'structured',
    maxAssertions: 50,
    trustLayers: ['source_fact', 'normalized_fact'],
  });

  // Step 2: Build a grounded system prompt
  const systemPrompt = `You are a clinical evidence assistant. Answer questions using
only the evidence provided. For each claim, cite the assertion ID and trust layer.
Do not infer beyond what the evidence supports.

Evidence context:
${JSON.stringify(context, null, 2)}`;

  // Step 3: Call the LLM with the grounded context
  const response = await anthropic.messages.create({
    model: 'claude-opus-4-1',
    max_tokens: 1024,
    system: systemPrompt,
    messages: [{ role: 'user', content: question }],
  });

  return response.content[0].type === 'text' ? response.content[0].text : '';
}

// Usage:
const answer = await answerClinicalQuestion(
  'james-58',
  "What is this patient's hemoglobin trend over the past 6 months?",
);
```
2
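A grounded prompt only helps if the citations in the answer actually point at retrieved evidence. As a post-hoc check, you can compare cited IDs against the context. This is a minimal sketch; the square-bracket citation format (`[assert-001]`) is an assumed convention you would need to request in the system prompt, not an SDK feature:

```typescript
// Extract bracketed assertion IDs from the model's answer.
// The "[assert-...]" citation format is an assumed prompt convention.
function extractCitedAssertionIds(answer: string): string[] {
  const matches = answer.match(/\[(assert-[\w-]+)\]/g) ?? [];
  // Strip the brackets and de-duplicate, preserving first-seen order
  return [...new Set(matches.map((m) => m.slice(1, -1)))];
}

// Return cited IDs that are absent from the retrieved context —
// a non-empty result flags a possibly fabricated citation.
function findUngroundedCitations(
  answer: string,
  retrievedIds: Set<string>,
): string[] {
  return extractCitedAssertionIds(answer).filter((id) => !retrievedIds.has(id));
}
```

Pass in the IDs from the retrieved `context.assertions` and reject or regenerate the answer whenever `findUngroundedCitations` returns a non-empty list.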
Cross-species cohort for model training data
Build a training dataset from cross-species evidence. OpenBio's normalized assertions allow you to combine human and animal data in a single query.
```typescript
import { OpenBio } from '@openbio/sdk';

const openbio = new OpenBio();

async function buildTrainingDataset() {
  // Find all subjects with anemia and an NSCLC diagnosis
  const cohort = await openbio.cohorts.query({
    criteria: [
      { type: 'lab_result', code: 'LOINC:718-7', value: { lt: 12 } }, // hemoglobin < 12 g/dL
      { type: 'diagnosis', code: 'SNOMED:254637007' }, // non-small cell lung cancer
    ],
    subjectTypes: ['human', 'canine', 'primate'],
    trustLayers: ['source_fact', 'normalized_fact'],
  });

  // Fetch full evidence for each matching subject
  const trainingData = await Promise.all(
    cohort.subjects.map(async (subject) => {
      const context = await openbio.agents.evidenceContext({
        subjectId: subject.id,
        format: 'structured',
        maxAssertions: 100,
        trustLayers: ['source_fact', 'normalized_fact'],
      });
      return {
        subjectId: subject.id,
        subjectType: subject.subjectType,
        assertions: context.assertions,
      };
    }),
  );

  return trainingData;
}

// Returns: array of { subjectId, subjectType, assertions[] }
// Ready for ingestion into a training pipeline
```
3
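The returned array can be serialized to JSONL (one subject per line), which most training pipelines ingest directly. A minimal sketch; the `TrainingRecord` interface mirrors the `{ subjectId, subjectType, assertions }` objects above but is defined here for illustration:

```typescript
// Illustrative record shape matching { subjectId, subjectType, assertions[] }
interface TrainingRecord {
  subjectId: string;
  subjectType: string;
  assertions: unknown[];
}

// Serialize to JSONL: one JSON object per line, no trailing newline
function toJsonl(records: TrainingRecord[]): string {
  return records.map((record) => JSON.stringify(record)).join('\n');
}
```

JSONL keeps each subject independently parseable, so a pipeline can stream the file without loading the whole dataset into memory.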
Provenance-aware citation in agent output
When an agent cites a finding, it should include the provenance chain — not just the value. This pattern fetches the full chain and formats it as an in-text citation.
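Provenance chains often hold more than one node (raw source, normalization, entity resolution), and sometimes a reader needs the whole chain rather than only the first hop. This is a minimal footnote-style sketch over plain objects; the `ProvenanceNode` shape is assumed for illustration, not taken from the SDK:

```typescript
// Hypothetical node shape; the real SDK type may differ.
interface ProvenanceNode {
  source: string; // e.g. 'Quest Diagnostics', a normalization service
  layer: string; // e.g. 'source_fact', 'normalized_fact'
}

// Render every hop as "A (source_fact) → B (normalized_fact)"
function renderChain(chain: ProvenanceNode[]): string {
  return chain.map((node) => `${node.source} (${node.layer})`).join(' → ');
}
```

This suits audit views or footnotes, while the in-text citation pattern below deliberately keeps only the first node for brevity.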
```typescript
import { OpenBio } from '@openbio/sdk';

const openbio = new OpenBio();

async function formatCitation(assertionId: string): Promise<string> {
  const assertion = await openbio.evidence.get(assertionId);
  const chain = assertion.provenanceChain;

  // First node in the chain = raw source
  const source = chain[0];
  const collection = new Date(source.collectedAt).toLocaleDateString('en-US', {
    year: 'numeric',
    month: 'long',
    day: 'numeric',
  });

  // Build a structured citation string
  return `[${assertion.display ?? assertion.type}: ${assertion.value} ${assertion.unit ?? ''} · ${source.source} · ${collection} · Trust: ${assertion.trustLayer} (${Math.round(assertion.confidence * 100)}%)]`;
}

// Example output:
// [Hemoglobin: 9.2 g/dL · Quest Diagnostics · March 15, 2025 · Trust: source_fact (99%)]

// Usage in agent response generation:
const citation = await formatCitation('assert-001');
const agentResponse = `The patient's hemoglobin is critically low at 9.2 g/dL ${citation}.`;
```
Try these patterns live
Every API call in these patterns runs against the demo's synthetic dataset. Explore the endpoints in the API Playground.