Introduction to Fenic

Fenic is a declarative context engineering framework that works with any agent runtime. Apply context operations (extract, chunk, retrieve, store, compact, summarize) to produce typed, tool-bounded outputs your agents can use—with inference offloading and no framework lock-in.

What is Context Engineering?

Context engineering is the practice of managing everything that goes into an LLM’s context window: retrieval results, memory, conversation history, tool responses, and prompts. It’s both an information problem (what information, in what structure) and an optimization problem (how much, when to compress, what to forget). Fenic’s declarative approach fits naturally here. Instead of writing imperative code for each context operation, you describe what your context should look like—and iterate quickly as you learn what works.

The Fenic Approach

| Without Fenic | With Fenic |
| --- | --- |
| Agent summarizes conversation → tokens consumed | Fenic summarizes → agent gets result; less context bloat |
| Agent extracts facts → tokens consumed | Fenic extracts → agent gets structured data |
| Agent searches, filters, aggregates → multiple tool calls | Fenic pre-computes → agent gets precise rows |
| Context ops compete with reasoning | Less context bloat → agents stay focused on reasoning |

Key Benefits

Inference Offloading

Summarization, extraction, and embedding happen outside your agent’s context window. Your runtime gets the results with less context bloat.

Framework Agnostic

Works with any agent framework (LangGraph, PydanticAI, CrewAI, etc.). Expose context as MCP tools or Python functions.

Declarative Transforms

Combine deterministic operations (filter, join, aggregate) with semantic ones (extract, embed, summarize) in a single composable flow.

Typed & Bounded

Model context relationally with strong typing. Query it precisely with result caps and token-budget awareness.
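To make "bounded" concrete, here is a minimal plain-Python sketch of a result cap on a tool surface. The `bounded_tool` wrapper and `search_memory` name are illustrative, not Fenic API:

```python
from typing import Callable

def bounded_tool(query_fn: Callable[..., list], max_rows: int = 10) -> Callable[..., list]:
    """Wrap a query function so it never returns more than max_rows rows."""
    def wrapper(*args, **kwargs) -> list:
        return query_fn(*args, **kwargs)[:max_rows]
    return wrapper

# Toy backing store standing in for a typed context table.
rows = [{"id": i, "fact": f"fact-{i}"} for i in range(100)]

# Expose a capped, read-only tool over it: the agent sees at most 5 rows.
search_memory = bounded_tool(lambda: rows, max_rows=5)
```

The cap lives in the tool surface, not in the agent's prompt, so the agent cannot accidentally pull an unbounded result set into its context window.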

What You Can Build

Memory & Personalization

  • Curated memory packs — Extract/dedupe/redact facts; serve read-only for recall
  • Blocks & episodes — Persistent profile + recent event timeline; scoped snapshots
  • Decaying resolution memory — Window functions for temporal compression (daily → weekly → monthly)
  • Cross-agent shared memory — Typed tables accessible by multiple agents in your framework
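The "decaying resolution" idea above can be sketched without any Fenic API: events are assigned to coarser buckets as they age, so older memory is stored at lower temporal resolution. The thresholds and helper names here are illustrative assumptions:

```python
from datetime import date, timedelta

def resolution(age_days: int) -> str:
    """Older memories are kept at coarser granularity (assumed thresholds)."""
    if age_days <= 7:
        return "daily"
    if age_days <= 30:
        return "weekly"
    return "monthly"

def bucket(day: date, today: date) -> tuple:
    """Assign an event date to a retention bucket based on its age."""
    res = resolution((today - day).days)
    if res == "daily":
        return ("daily", day.isoformat())
    if res == "weekly":
        return ("weekly", tuple(day.isocalendar())[:2])  # (year, ISO week)
    return ("monthly", (day.year, day.month))

today = date(2025, 6, 30)
events = [today - timedelta(days=d) for d in (0, 3, 10, 45)]
print([bucket(e, today)[0] for e in events])  # ['daily', 'daily', 'weekly', 'monthly']
```

In Fenic the same grouping would be expressed declaratively with window/aggregation operations rather than hand-written loops.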

Retrieval & Knowledge

  • Policy / KB Q&A — Parse PDFs → extract(Schema) → embed → neighbors with citations
  • Chunked retrieval — Chunk/overlap you control, hybrid filter, optional re-rank
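"Chunk/overlap you control" boils down to windowing a token sequence with a stride smaller than the window. A plain-Python sketch (the `chunk` helper is illustrative, not Fenic API, and "tokens" here are just whitespace-split words):

```python
def chunk(tokens: list[str], size: int, overlap: int) -> list[list[str]]:
    """Split a token list into fixed-size windows with a controlled overlap."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [tokens[i:i + size] for i in range(0, len(tokens), step) if tokens[i:i + size]]

words = "the quick brown fox jumps over the lazy dog".split()
chunks = chunk(words, size=4, overlap=1)
# Adjacent chunks share `overlap` tokens, so no boundary context is lost.
```

Each chunk would then be embedded and indexed; the overlap parameter trades index size against retrieval recall at chunk boundaries.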

Context Operations (Inference Offloaded)

  • Summarization — Deterministic or LLM-powered, reducing context bloat so agents stay focused
  • Invariant management — Store facts that should persist; re-inject at decision points
  • Token-budget-aware truncation — Shape tool responses to fit budgets
  • …and more — Fenic’s API allows you to define any context operation you might need
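Token-budget-aware truncation, stripped of any Fenic specifics, is just a greedy cut over an ordered list of items. In this sketch the token counter is a crude word count stand-in for a real tokenizer, and all names are illustrative:

```python
def truncate_to_budget(items: list[str], budget: int,
                       count_tokens=lambda s: len(s.split())) -> list[str]:
    """Keep items in order until the token budget would be exceeded."""
    kept, used = [], 0
    for item in items:
        cost = count_tokens(item)
        if used + cost > budget:
            break
        kept.append(item)
        used += cost
    return kept

responses = ["alpha beta", "gamma delta epsilon", "zeta"]
print(truncate_to_budget(responses, budget=5))  # ['alpha beta', 'gamma delta epsilon']
```

Applied to tool responses, this guarantees the shaped output fits the context budget regardless of how large the underlying result set is.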

Structured Context from Data

  • Entity matching — Resolve duplicates / link records
  • Theme extraction — Cluster + label patterns
  • Semantic linking — Connect records across systems by meaning
  • …and more — Fenic’s declarative API supports any data transformation your agents need
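"Connect records across systems by meaning" typically means comparing embeddings rather than strings. A toy sketch with hand-made 2-d vectors standing in for real embeddings (the vectors, record names, and `cosine` helper are all illustrative assumptions):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Two systems naming the same entities differently.
crm = {"Acme Corp": [0.9, 0.1], "Globex": [0.1, 0.9]}
billing = {"ACME Corporation": [0.88, 0.15], "Globex Inc.": [0.12, 0.85]}

# Link each CRM record to its nearest billing record by meaning, not spelling.
links = {src: max(billing, key=lambda name: cosine(vec, billing[name]))
         for src, vec in crm.items()}
print(links)  # {'Acme Corp': 'ACME Corporation', 'Globex': 'Globex Inc.'}
```

In practice the vectors would come from an embedding model and the nearest-neighbor search from an index, but the linking logic is the same.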

Core Concepts

Lifecycle: Hydrate → Shape → Serve → Operate

1. Hydrate — Load sources from PDFs, Markdown, CSV, JSON, or databases
2. Shape — Transform with deterministic ops (select/filter/join/window) + semantic ops (extract/embed/summarize)
3. Serve — Expose as bounded tools (MCP or Python functions) with result caps
4. Operate — Version with snapshots/tags; rollback instantly

Design Principles

| Principle | What It Means |
| --- | --- |
| Framework-agnostic | Works with any runtime that can call tools or functions |
| Inference offloading | Context operations happen in Fenic, not your agent’s context window |
| Context as typed tables | Model context relationally; query it precisely |
| Declarative transforms | Focus on what context to build, not how — iterate fast on context strategy |
| Bounded tool surfaces | Minimal, auditable interfaces with result caps |
| Immutable snapshots | Version context for reproducibility |
| Runtime enablement | Provide primitives; let your framework orchestrate |

Architecture

Fenic uses a session-centric design where all operations flow through Session.get_or_create(). Operations build logical plans that execute lazily when you call actions like .show(), .collect(), or .write.save_as_table().
import fenic as fc
from pydantic import BaseModel

# Schema describing the structure to extract from each row
class MyPydanticSchema(BaseModel):
    topic: str
    summary: str

# 1. Create session with semantic capabilities
session = fc.Session.get_or_create(
    fc.SessionConfig(
        app_name="my_app",
        semantic=fc.SemanticConfig(
            language_models={
                "gpt": fc.OpenAILanguageModel(
                    model_name="gpt-4o-mini",
                    rpm=500,
                    tpm=200_000
                )
            }
        )
    )
)

# 2. Load and shape data
df = session.create_dataframe([{"text": "..."}])

# 3. Apply transformations
result = df.select(
    fc.semantic.extract(
        fc.col("text"),
        MyPydanticSchema
    ).alias("structured_data")
)

# 4. Execute and display
result.show()

LLM calls still cost tokens/$, but Fenic keeps that work out of your agent’s prompt/context window, reducing context bloat.

Next Steps

Installation

Get Fenic installed and configured

Quickstart

Build your first context pipeline in 5 minutes
