For AI Agents
Structured Context
OpenBio doesn't give your agent a wall of text. It gives typed assertions with trust scores, provenance chains, and temporal context — exactly what LLMs need to reason about biological evidence without hallucinating.
What the payload looks like
The agents.evidenceContext endpoint returns a structured JSON object — not prose. Every assertion is individually typed with a trust layer and confidence score.
{
"subject": {
"id": "james-58",
"name": "James",
"age": 58,
"subjectType": "human"
},
"assertions": [
{
"id": "assert-001",
"type": "lab_result",
"value": 9.2,
"unit": "g/dL",
"code": "LOINC:718-7",
"display": "Hemoglobin",
"trustLayer": "source_fact",
"confidence": 0.99,
"temporalContext": {
"observedAt": "2025-03-15T08:42:00Z",
"certainty": "definite"
},
"provenance": {
"source": "Quest Diagnostics",
"protocol": "HL7v2 ORU",
"collectedAt": "2025-03-15T08:42:00Z",
"receivedAt": "2025-03-15T09:11:00Z"
}
},
{
"id": "assert-002",
"type": "genomic_variant",
"code": "EGFR_L858R",
"display": "EGFR L858R mutation",
"trustLayer": "normalized_fact",
"confidence": 0.97,
"temporalContext": {
"observedAt": "2025-01-20T14:30:00Z",
"certainty": "definite"
},
"provenance": {
"source": "Foundation Medicine",
"protocol": "NGS panel",
"collectedAt": "2025-01-20T14:30:00Z",
"receivedAt": "2025-01-21T09:00:00Z"
}
}
],
"summary": {
"totalCount": 847,
"byType": {
"lab_result": 238,
"genomic_variant": 187,
"imaging_finding": 136,
"medication": 152,
"diagnosis": 134
},
"byTrustLayer": {
"source_fact": 412,
"normalized_fact": 289,
"derived_feature": 112,
"model_output": 34
}
}
}
Retrieving context
Use the SDK to retrieve structured context. Filter by trust layer to control the epistemic quality of what your agent sees.
const context = await client.agents.evidenceContext({
subjectId: 'james-58',
format: 'structured',
maxAssertions: 50,
trustLayers: ['source_fact', 'normalized_fact'],
// Optionally filter to specific evidence types
types: ['lab_result', 'genomic_variant', 'diagnosis'],
});
// context.assertions — typed array, ready for LLM input
// context.summary — aggregate stats for grounding
// context.subject — subject metadata
Why structured matters
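Once the context is in hand, a common first step is to partition the assertions by trust layer before building a prompt. A minimal sketch in TypeScript; the `Assertion` interface below is an assumption modeled on the payload fields shown above, not the SDK's exported types:

```typescript
// Assumed shape, modeled on the example payload (not the official SDK types).
type TrustLayer = 'source_fact' | 'normalized_fact' | 'derived_feature' | 'model_output';

interface Assertion {
  id: string;
  type: string;
  display: string;
  trustLayer: TrustLayer;
  confidence: number;
  value?: number;
  unit?: string;
}

// Group assertions by trust layer so each prompt section can be labeled explicitly.
function partitionByTrustLayer(assertions: Assertion[]): Map<TrustLayer, Assertion[]> {
  const groups = new Map<TrustLayer, Assertion[]>();
  for (const a of assertions) {
    const bucket = groups.get(a.trustLayer) ?? [];
    bucket.push(a);
    groups.set(a.trustLayer, bucket);
  }
  return groups;
}
```

You can then present `source_fact` and `normalized_fact` groups as ground truth and flag `model_output` entries as predictions in the prompt.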
LLMs hallucinate when they can't distinguish between a measured fact and a model prediction. OpenBio's typed payload makes that distinction explicit and machine-readable.
Without OpenBio
"Patient has hemoglobin of 9.2 g/dL, EGFR L858R mutation, and is likely responding to erlotinib treatment."
Trust levels are mixed. The agent cannot tell which statements are raw measurements and which are model predictions.
With OpenBio
{ "value": 9.2, "trustLayer": "source_fact", "confidence": 0.99 }
{ "code": "EGFR_L858R", "trustLayer": "normalized_fact", "confidence": 0.97 }
{ "type": "model_output", "confidence": 0.73 }
Each assertion carries its trust level. The agent knows exactly what to trust.
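In practice, this lets you label every assertion inline and gate low-confidence predictions before they ever reach the model. A sketch in TypeScript; the interface is an assumption modeled on the payload above, and the confidence threshold is illustrative:

```typescript
// Assumed shape, modeled on the example payload fields.
interface Assertion {
  display: string;
  trustLayer: string;
  confidence: number;
  value?: number;
  unit?: string;
}

// Render each assertion with an explicit trust label so the LLM can weigh it,
// dropping model outputs below an (illustrative) confidence threshold.
function renderForPrompt(assertions: Assertion[], minModelConfidence = 0.8): string[] {
  return assertions
    .filter((a) => a.trustLayer !== 'model_output' || a.confidence >= minModelConfidence)
    .map((a) => {
      const value = a.value !== undefined ? `: ${a.value}${a.unit ? ' ' + a.unit : ''}` : '';
      return `[${a.trustLayer}, conf=${a.confidence.toFixed(2)}] ${a.display}${value}`;
    });
}
```

The resulting lines carry their trust layer and confidence verbatim, so the distinction survives all the way into the prompt.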