AI agents often struggle with memory limitations, causing errors, inconsistent outputs, and overlooked details. Context Engineering addresses this challenge by defining what AI should remember, how it retrieves information, and how it manages context efficiently.

Part 1 of this Context Engineering series explains why AI forgets, the four common failure modes, and practical strategies, WRITE and SELECT, for structuring, storing, and retrieving information. These approaches ensure reliable reasoning, accurate outputs, and scalable performance across complex, multi-step AI tasks.

Imagine you’re at a restaurant, and the waiter has a wonderful memory. They recall every dish on the menu, every order you’ve ever placed, every special the kitchen has ever run, and every conversation with every customer.

Sounds wonderful, doesn’t it? The problem is that when you ask, “What’s good for lunch?” the waiter freezes. They hold so much information that they can’t focus on your simple question. Without proper Context Engineering, this is exactly what happens to AI agents.

AI agents are getting smarter and better at handling complicated tasks, but they have a big problem: their “working memory” is limited. AI models can only hold so much information in their active context at any one time, just like the waiter buried under everything they remember. If you overload them, they slow down, make mistakes, or even stop working altogether.

Context Engineering is the field that deals with controlling what AI agents remember, forget, and find at each step of their work.

AI researchers define Context Engineering as the delicate art and science of filling the context window with just the right information for the next step.

A context window is the amount of text that modern Large Language Models (LLMs) can “pay attention to” when they make a response.

As Andrej Karpathy explained, “LLMs are like a new kind of operating system. The LLM is like the CPU and its context window is like the RAM.”

That “RAM” typically holds several kinds of information:

  • Instructions
    • System prompts
    • Few-shot examples
    • Tool descriptions
    • Response format requirements
    • Behavioral guidelines
  • Knowledge
    • Domain facts
    • Company policies
    • Historical data
    • User preferences
    • Retrieved documents
  • Tools
    • Available function descriptions
    • API specifications
    • Previous tool call results
    • Error messages and feedback
Note: Test your AI agents across real-world scenarios. Try Agent to Agent Testing today!

Why Context Engineering Matters: The Four Failure Modes

    When context isn’t handled correctly, AI agents break down in certain, predictable ways. Drew Breunig identified four critical failure modes that impact AI agents:

    1. Context Poisoning

    What happens: AI remembers a hallucination or mistake and keeps it in its memory between interactions.

The DeepMind team documented this problem in their Gemini 2.5 technical report, observing it while the model played Pokémon:

    “An especially egregious form of this issue can take place with ‘context poisoning’ – where many parts of the context (goals, summary) are ‘poisoned’ with misinformation about the game state, which can often take a very long time to undo.”

Real-world scenario: Imagine a customer-support agent that hallucinates a nonexistent “lifetime warranty” policy, stores it in memory, and then confidently cites that “policy” in every later conversation until someone clears the bad entry.

    Impact: The AI will always work with wrong information until someone fixes its memory. If the poisoned context changes the AI’s goals or strategy, it can “become fixated on achieving impossible or irrelevant goals.”

    2. Context Distraction

    What happens: AI gets too much information and starts to focus on things that aren’t important instead of the task at hand.

    As Drew Breunig explains, “Context Distraction is when the context overwhelms the training.” The Gemini 2.5 team observed this while playing Pokémon:

    “As the context grew significantly beyond 100k tokens, the agent showed a tendency toward favoring repeating actions from its vast history rather than synthesizing novel plans.”

Instead of using what it had learned to come up with new plans, the agent fixated on repeating actions it had already taken earlier in its long history.

Real-world scenario: Imagine a coding agent that has been debugging for hours and now carries hundreds of messages of history; instead of planning a fresh approach, it keeps re-running the same searches and re-proposing fixes it has already tried.

    Impact: The AI gets distracted by accumulated context and either repeats past actions or focuses on irrelevant details instead of addressing the actual query.

    3. Context Confusion

    What happens: Superfluous context influences responses in unexpected ways.

    Drew Breunig defines this as “when superfluous context influences the response”. The problem: “If you put something in the context, the model has to pay attention to it.”

One striking example: when researchers gave a Llama 3.1 8B model a query along with all 46 tools from the GeoEngine benchmark, it failed. When they reduced the selection to just 19 relevant tools, the model succeeded, even though both prompts fit comfortably within the 16k context window.

Real-world scenario: Imagine an assistant wired up to dozens of tools and knowledge sources; asked a simple billing question, it calls an unrelated weather tool simply because that tool’s description is sitting in its context.

Impact: The AI has to attend to superfluous tools and documents, so it calls irrelevant tools or produces off-target responses instead of simply answering the question.

    4. Context Clash

What happens: Different pieces of the accumulated context contradict each other, which changes the AI’s responses in ways that are hard to predict.

    Drew Breunig describes this as “when parts of the context disagree.”

    Microsoft and Salesforce research documented this brilliantly. They took benchmark prompts and “sharded” their information across multiple chat turns, simulating how agents gather data incrementally. The results were dramatic: an average 39% drop in performance.

    Why? Because “when LLMs take a wrong turn in a conversation, they get lost and do not recover.” The assembled context contains the AI’s early (incorrect) attempts at answering before it had all the information. These wrong answers remain in the context and poison the final response.

Real-world scenario: Imagine an agent gathering requirements over several chat turns; its early, half-informed guess at an answer stays in the context and conflicts with the details that arrive later, so the final response mixes the two.

Impact: The AI anchors on its own earlier, incorrect attempts. Because it must pay attention to everything in the context window, including the parts that conflict, the final answer suffers.

    Anthropic’s research emphasises that “context is a critical but finite resource” that must be managed carefully to avoid these failures.

    These four failure modes don’t just appear in research papers – they also show up in production environments every day, especially when multiple agents collaborate or pass work between each other. In these environments, even a small mismatch in shared memory or a context handoff can ripple through the chain, producing inconsistent behavior that’s hard to trace.

    Testing each agent in isolation only catches part of the issue. You also need to see how they behave when integrated, when outputs from one become inputs for another, when reasoning chains overlap, or when context updates get out of sync. This is where agent testing becomes essential. It helps surface subtle issues like message drift, state misalignment, and reasoning divergence long before they reach production.

To test AI agents, consider using AI-native agentic cloud platforms like LambdaTest. It offers Agent to Agent Testing, where you define multiple synthetic personas and simulate chat, voice, and multimodal interactions. These simulations help you measure metrics such as bias, hallucination, and tone consistency at scale, across varied input types and handoffs between agents.

    To get started, check out this LambdaTest Agent to Agent Testing guide.

    The Four Pillars of Context Engineering

Leading AI researchers and companies have converged on four core strategies for managing context well. This post covers the first two pillars; the remaining two will be covered in Part 2.

    Pillar 1: WRITE (Structured Context Storage)

Core Principle: Don’t make the AI hold everything in active memory. Store data outside the context window in organized formats that are easy to search and retrieve.

    Technique 1.1: Scratchpads

    A scratchpad is a place where the AI can write down temporary notes while it works on a task.

    How it works:

    Think of it like sticky notes on a desk:

    • 🟨 Yellow sticky: “Don’t forget: Delta is the user’s favourite airline.”
    • 🟦 Blue sticky: “Important: The budget is $500 at most.”
    • 🟩 Green sticky: “To Do: Look up the prices of flights for October”

The AI can:

• Write a new sticky note whenever it learns something important.
• Read old sticky notes whenever it needs to remember something.
• Review all the notes together to understand where the task stands.

The AI doesn’t try to hold everything in its head; it writes things down and looks them up when it needs them.

    Example Usage:
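Below is a minimal sketch of what scratchpad usage might look like inside an agent, assuming a simple in-memory store. The `Scratchpad` class, the tags, and the trip-planning notes are hypothetical, purely for illustration.

```python
# A minimal scratchpad: the agent writes notes as it works and reads
# them back later instead of keeping everything in the context window.

class Scratchpad:
    def __init__(self):
        self.notes = []  # each note is a (tag, text) pair

    def write(self, tag, text):
        """Jot down a new note (like adding a sticky note)."""
        self.notes.append((tag, text))

    def read(self, tag=None):
        """Read notes back, optionally filtered by tag."""
        return [text for t, text in self.notes if tag is None or t == tag]


# Hypothetical trip-planning agent using the scratchpad
pad = Scratchpad()
pad.write("preference", "User's favourite airline is Delta")
pad.write("constraint", "Budget is $500 at most")
pad.write("todo", "Look up flight prices for October")

# When building the next prompt, pull in only the notes that matter now
facts = pad.read("constraint") + pad.read("preference")
prompt = "Plan the trip. Known facts:\n" + "\n".join(f"- {n}" for n in facts)
print(prompt)
```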



    Anthropic’s multi-agent research system uses this approach extensively: “The Lead Researcher begins by evaluating the method and saving the plan to Memory to maintain context, as exceeding 200,000 tokens in the context window will result in truncation, making it crucial to preserve the plan.”

    Technique 1.2: Structured Memory Systems

    AI agents need more than just temporary scratchpads; they need long-term memory that lasts between sessions.

    There are three types of memory:

    Semantic Memory (Facts & Knowledge)

    Like knowing things about someone:

    • “Sarah likes her coffee with no sugar and no cream.”
    • “Sarah can’t eat peanuts because she’s allergic to them.”
• “Blue is Sarah’s favorite color.”

    The AI keeps track of things about you and what you like. It doesn’t remember when it learnt these things; it only knows that they are true.

    Episodic Memory (Past Events)

    Like remembering certain times:

    • “We fixed the login problem last Tuesday by changing the password.”
    • “We set up the new database two weeks ago.”
    • “We tried approach A last month, but it didn’t work.”

    The AI can remember certain events and what happened. Like your photo album, each memory has a date and a story.

    Procedural Memory (How-To Knowledge)

    Like knowing how to make a cake:

    • Step 1: Combine the sugar and flour.
    • Step 2: Put in the milk and eggs.
    • Step 3: Put in the oven at 350°F for 30 minutes.

The AI knows how to do things: it follows the steps, like working through a recipe, to get something done.

    Key Design Principles:

    • Schema-Driven: Use structured formats (JSON, databases), not free text.
    • Time-Stamped: Track when information was added.
    • Tagged: Enable filtering by category, importance, freshness.
    • Versioned: Track changes to memories over time.
    • Queryable: Support efficient lookup and retrieval.
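As a rough illustration of the principles above, a single memory entry could be stored like this. The field names and values are assumptions for the sketch, not a standard schema.

```python
# Illustrative, schema-driven memory record combining the design principles:
# structured, time-stamped, tagged, versioned, and easy to query.
import json
from datetime import datetime, timezone

memory_entry = {
    "type": "semantic",                       # semantic | episodic | procedural
    "content": "Sarah is allergic to peanuts",
    "tags": ["user:sarah", "dietary", "high-importance"],
    "created_at": datetime.now(timezone.utc).isoformat(),
    "version": 1,
    "source": "chat-session-42",
}

# Stored as JSON (or rows in a database), entries can later be filtered
# and queried, e.g. "all high-importance facts about Sarah".
print(json.dumps(memory_entry, indent=2))
```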

    Pillar 2: SELECT (Intelligent Context Retrieval)

Core Principle: Don’t load everything. Retrieve only what you need for the task at hand.

    Technique 2.1: Retrieval-Augmented Generation (RAG)

    RAG is probably the most important method in modern Context Engineering. It lets AI work with knowledge bases that are much bigger than their context windows.

    How RAG Works:

    Think about being in a big library with 10,000 books. “What’s the recipe for chocolate chip cookies?” someone asks.

    Without RAG (The Dumb Way):

    • Take all 10,000 books to your desk.
    • Try to read them all at once.
    • Feel overwhelmed and lost.
    • Take a long time to find the answer.

    With RAG (The Smart Way):

    Step 1: Organize the Library (Done Once)

    • Put a label on each book that says “Cooking”, “History”, “Science”, etc.
    • Make a card catalogue that works like a magic search index.
    • Every card points to books that are related.

    Step 2: Smart Search (Every Question)

    • Question: “Chocolate chip cookie recipe?”
    • The magic catalogue says, “Check the ‘Baking Cookbook’ on shelf 5!”
    • Grab ONLY that one book (not all 10,000!).

    Step 3: Quick Answer

    • Read only the cookie section (not the whole book).
    • Find the recipe.
    • Give the answer.

    You only had to work with one book instead of ten thousand! That’s RAG: finding the needle without having to carry the whole haystack.

    The RAG Pipeline in More Detail
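Here is a compact, self-contained sketch of the three stages (index once, retrieve per question, then generate). The toy `embed()` function and the sample documents are stand-ins for a real embedding model and corpus.

```python
# Simplified RAG pipeline sketch. embed() is a toy bag-of-words "embedding"
# so the whole example runs without external dependencies.
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words vector (real systems use an embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# 1) INDEX (done once): chunk documents and store a vector for each chunk
documents = [
    "Chocolate chip cookies: cream butter and sugar, add eggs, flour and chips, bake at 350F.",
    "The French Revolution began in 1789 and reshaped European politics.",
    "Photosynthesis converts sunlight, water, and CO2 into glucose and oxygen.",
]
index = [(doc, embed(doc)) for doc in documents]

# 2) RETRIEVE (every question): score chunks against the query, keep the top-k
query = "What's the recipe for chocolate chip cookies?"
q_vec = embed(query)
top_chunks = [doc for doc, vec in
              sorted(index, key=lambda item: cosine(q_vec, item[1]), reverse=True)[:1]]

# 3) GENERATE: put only the retrieved chunks into the prompt
prompt = "Answer using this context:\n" + "\n".join(top_chunks) + f"\n\nQuestion: {query}"
print(prompt)
```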



    Advanced RAG Techniques:

    Hybrid Search (Keyword + Semantic)

    Think of two friends helping you look for a toy you lost:

• Friend 1 (Semantic Search): “Did you lose something round and red? It might be in the toy box with other balls!”
      • Understands meaning and finds similar things
    • Friend 2 (Keyword Search): “You said ‘red ball’? I’ll look for anything labelled ‘red’ or ‘ball’!”
      • Looks for exact words and labels
    • Together: They put their results together and show you the best ones!
      • Friend 1 found three types of balls: a red ball, a red bouncy ball, and a rubber ball.
      • Friend 2 found: a red ball, a ball pit, and a red balloon.
• Combined: red ball (both found it!), red bouncy ball, rubber ball → the best results!

    Using both methods together finds better answers than using just one!
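A sketch of how the two “friends” might be blended in code. Both scoring functions are toy stand-ins (real systems typically combine something like BM25 with embedding similarity), and the `alpha` weight is an arbitrary choice.

```python
# Hybrid search sketch: blend an exact-keyword score with a "semantic"
# score, then rank documents by the combined total.

def keyword_score(query, doc):
    """Fraction of query words that appear verbatim in the document."""
    q_words = query.lower().split()
    d_words = set(doc.lower().split())
    return sum(1 for w in q_words if w in d_words) / len(q_words)

def semantic_score(query, doc):
    """Toy 'semantic' signal: also counts related words, not just exact matches."""
    related = {"ball": {"ball", "bouncy", "balloon"}, "red": {"red", "crimson"}}
    q_words = query.lower().split()
    d_words = set(doc.lower().split())
    return sum(1 for w in q_words if related.get(w, {w}) & d_words) / len(q_words)

def hybrid_search(query, docs, alpha=0.5):
    """alpha balances keyword weight vs. semantic weight."""
    scored = [(alpha * keyword_score(query, d) + (1 - alpha) * semantic_score(query, d), d)
              for d in docs]
    return [d for _, d in sorted(scored, reverse=True)]

toys = ["red ball", "bouncy ball", "ball pit", "red balloon", "wooden blocks"]
print(hybrid_search("red ball", toys))  # "red ball" ranks first because both signals agree
```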

    Contextual Chunk Retrieval

    Imagine reading a storybook, and you find the perfect sentence on page 42:

    Without Context:

    • You only read that one sentence.
    • “…and then she opened the door.”
    • Wait… who is “she”? What door? Why?

    With Context (The Smart Way):

    • Read the page before (page 41): “Sarah walked nervously toward the old house…”
    • Read the target page (page 42): “…and then she opened the door.”
    • Read the page after (page 43): “Inside, she found the mysterious box…”

    Now you understand! Sarah is the character, it’s an old house door, and she’s searching for something.

The lesson: don’t take just the exact piece; grab a little before and after so you get the whole picture!
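In code, the same idea is simply “return the matched chunk plus its neighbors.” The chunk list and window size below are illustrative.

```python
# Contextual chunk retrieval sketch: return the matched chunk plus its
# neighbors so the model sees enough surrounding context.

chunks = [
    "Sarah walked nervously toward the old house...",   # page 41
    "...and then she opened the door.",                 # page 42 (the match)
    "Inside, she found the mysterious box...",          # page 43
]

def with_neighbors(chunks, hit_index, window=1):
    """Expand a retrieved chunk to include `window` chunks on each side."""
    start = max(0, hit_index - window)
    end = min(len(chunks), hit_index + window + 1)
    return chunks[start:end]

# Suppose retrieval matched chunk 1 ("...she opened the door.")
context = with_neighbors(chunks, hit_index=1)
print("\n".join(context))
```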

    Reranking for Precision

Imagine deciding what the best pizza toppings are:

First Search: Someone quickly grabs 10 random toppings from the pantry, including:

    • Pepperoni ✓ (good!)
    • Chocolate chips ✗ (weird on pizza…)
    • Mushrooms ✓ (good!)
    • Gummy bears ✗ (definitely not!)
    • Cheese ✓ (perfect!)

    Reranking (The Smart Judge): Now a pizza expert looks at these 10 items and rates them:

    • Pepperoni: 9/10 ⭐⭐⭐⭐⭐
    • Cheese: 10/10 ⭐⭐⭐⭐⭐
    • Mushrooms: 8/10 ⭐⭐⭐⭐
    • Chocolate chips: 1/10 ⭐
    • Gummy bears: 0/10

Final Selection: Keep only the top-rated items!

    The reranker is like a specialist who double-checks the first search and puts the best items at the top!
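A sketch of the retrieve-then-rerank pattern: a fast first pass grabs candidates, a slower “expert” scores them, and only the top-rated items are kept. The hard-coded `expert_score()` is a stand-in for a cross-encoder or LLM judge.

```python
# Rerank sketch: first-pass candidates get re-scored by an "expert",
# and only the best few survive.

candidates = ["pepperoni", "chocolate chips", "mushrooms", "gummy bears", "cheese"]

def expert_score(query, item):
    """Stand-in for a cross-encoder / LLM judge returning a relevance score 0-10."""
    ratings = {"pepperoni": 9, "cheese": 10, "mushrooms": 8,
               "chocolate chips": 1, "gummy bears": 0}
    return ratings.get(item, 5)

def rerank(query, items, keep=3):
    scored = sorted(items, key=lambda it: expert_score(query, it), reverse=True)
    return scored[:keep]

print(rerank("good pizza toppings", candidates))
# ['cheese', 'pepperoni', 'mushrooms']
```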

    RAG Impact:

If done right, RAG can make a big difference in performance. Cornell University research indicates that using RAG for tool descriptions can improve tool selection accuracy roughly threefold compared with loading every tool into context.

The Berkeley Function-Calling Leaderboard shows that every model performs worse as more tools are added, even when all of them fit within the context window limit.

    Technique 2.2: Dynamic Context Loading

    Imagine you’re packing for a day trip, and your backpack can hold 10 pounds:

    Priority 1 – Critical (MUST pack)

    • Water bottle (2 pounds) → Always pack this!
    • Your phone (1 pound) → Must have!
    • Current weight: 3 pounds

    Priority 2 – Important (Pack if room)

    • Lunch box (2 pounds) → Got room? Yes! Pack it.
    • Sunscreen (0.5 pounds) → Got room? Yes! Pack it.
    • Current weight: 5.5 pounds.

    Priority 3 – Nice-to-Have (Fill remaining space)

    • Comic book (1 pound) → Still have room? Yes!
    • Frisbee (0.5 pounds) → Still have room? Yes!
    • Skateboard (4 pounds) → Will this fit? No! (would make it 11 pounds).
    • Current weight: 7 pounds.

    Final backpack: Water, phone, lunch, sunscreen, comic book, frisbee = 7 pounds (under 10 limit!).

    The AI does the same thing! It packs the most important information first, then adds more until the backpack (context window) is nearly full, but never overflowing.
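The same packing logic, sketched in code with made-up token counts; lower priority numbers mean “pack this first,” and the budget plays the role of the 10-pound backpack.

```python
# Dynamic context loading sketch: pack items by priority until the
# token budget (the "backpack") is nearly full.

def pack_context(items, budget):
    """items: list of (priority, tokens, text); lower priority number = more important."""
    packed, used = [], 0
    for priority, tokens, text in sorted(items, key=lambda x: x[0]):
        if used + tokens <= budget:
            packed.append(text)
            used += tokens
    return packed, used

candidate_context = [
    (1, 300, "System instructions"),         # critical
    (1, 100, "Current user question"),       # critical
    (2, 200, "User preferences"),            # important
    (2, 50,  "Active task state"),           # important
    (3, 400, "Older conversation summary"),  # nice-to-have
    (3, 900, "Full interaction history"),    # nice-to-have, but large
]

context, used_tokens = pack_context(candidate_context, budget=1200)
print(context, used_tokens)  # the full interaction history doesn't fit and is left out
```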

    Technique 2.3: Metadata Filtering

    The Smart Pre-Filter Strategy

    Imagine you’re looking for your red LEGO car in a huge toy room:

    Dumb Way

    • Search through ALL toys (dolls, blocks, cars, puzzles, stuffed animals…).
    • Takes forever!
    • Find 100 toys, most not even cars.

    Smart Way (Metadata Filtering)

    • Filter 1: “Only show me cars” (not dolls, not blocks).
      • Now looking at 20 cars instead of 1000 toys.
    • Filter 2: “Only red ones” (not blue, not yellow).
      • Now looking at 5 red cars.
    • Filter 3: “Only LEGOs” (not Hot Wheels, not wooden cars).
      • Found it! 1 red LEGO car.

    Why this is brilliant:

    • Faster: Searched 5 items instead of 1000.
    • More accurate: Every result is relevant.
    • Respects rules: Won’t show you toys that belong to your sibling.
    • Fresh: Can filter to “only toys bought this month”.

    The AI does this with information – it filters first, then searches!
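A sketch of filter-then-search over document metadata; the fields, categories, and access rules below are illustrative assumptions.

```python
# Metadata filtering sketch: narrow the candidate set with cheap metadata
# filters BEFORE running any expensive semantic search.

documents = [
    {"text": "VPN setup guide",       "category": "IT",      "team": "all",     "year": 2025},
    {"text": "Expense policy",        "category": "Finance", "team": "all",     "year": 2024},
    {"text": "Internal salary bands", "category": "HR",      "team": "hr-only", "year": 2025},
    {"text": "Old VPN guide",         "category": "IT",      "team": "all",     "year": 2019},
]

def metadata_filter(docs, **conditions):
    """Keep only documents whose metadata matches every condition."""
    return [d for d in docs if all(d.get(k) == v for k, v in conditions.items())]

# Filter first: right category, accessible to everyone, recent enough...
candidates = metadata_filter(documents, category="IT", team="all", year=2025)

# ...then run the (expensive) semantic search over the few survivors.
print([d["text"] for d in candidates])  # ['VPN setup guide']
```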

    Real-World Example: Enterprise Documentation Assistant

    Let’s see how WRITE and SELECT work together in practice:

Challenge: A company has 10,000 pages of documentation, and the AI assistant must answer employee questions from it.

    Without Context Engineering

    • Try to load all 10,000 pages → Context overflow.
    • Load random pages → Wrong answers.
    • Load recent pages → Misses critical info.

    With Context Engineering (WRITE + SELECT)

    WRITE Phase
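A minimal sketch of what the WRITE phase could look like: documents are chunked and stored outside the context window with searchable metadata. The helper names, chunk size, and sample documents are hypothetical.

```python
# WRITE phase sketch: store documentation outside the context window in a
# structured, queryable form (here, a simple in-memory index).

doc_index = []  # in practice this would be a vector database or search index

def write_document(title, text, category, updated):
    """Chunk a document and store each chunk with searchable metadata."""
    chunk_size = 500  # characters per chunk, purely illustrative
    for i in range(0, len(text), chunk_size):
        doc_index.append({
            "title": title,
            "chunk": text[i:i + chunk_size],
            "category": category,
            "updated": updated,
        })

write_document("Expense Policy", "Employees may claim travel costs up to ...",
               category="Finance", updated="2025-03-01")
write_document("VPN Setup", "To connect to the corporate VPN, open the client and ...",
               category="IT", updated="2025-06-15")
```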



    SELECT Phase
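And a matching sketch of the SELECT phase, continuing the example above: filter by metadata first, then keep only the few most relevant chunks. The toy word-overlap score stands in for embedding search and reranking.

```python
# SELECT phase sketch (continues from the WRITE sketch above, which builds
# `doc_index`): filter by metadata, then keep only the most relevant chunks.

def select_context(question, category=None, top_k=3):
    """Return the few chunks most relevant to the question."""
    candidates = [d for d in doc_index
                  if category is None or d["category"] == category]
    q_words = set(question.lower().split())

    def overlap(d):
        # Toy relevance score; a real system would use embeddings + reranking.
        return len(q_words & set(d["chunk"].lower().split()))

    return sorted(candidates, key=overlap, reverse=True)[:top_k]

question = "How do I connect to the corporate VPN?"
chunks = select_context(question, category="IT")
prompt = ("Answer using this documentation:\n"
          + "\n".join(c["chunk"] for c in chunks)
          + f"\n\nEmployee question: {question}")
print(prompt)
```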



    Result: Fast, accurate answers without context overload!

How We’re Applying WRITE and SELECT at LambdaTest

    At LambdaTest, we’re deeply invested in building intelligent AI agents. Here’s how we’re applying these first two pillars (without revealing proprietary details):

    Our WRITE Strategy

    Our WRITE strategy begins with a Structured Information Architecture, ensuring that every piece of data is organized for clarity and accessibility.

    Structured Information Architecture

We organize information hierarchically, following best practices outlined in Daffodil Software Engineering Insights.

    Level 1: System Policies (rarely change)

    • Core functionality rules.
    • Platform capabilities.
    • Security requirements.

    Level 2: Feature Documentation (monthly updates)

    • Feature specifications.
    • Integration guidelines.
    • API references.

    Level 3: User Context (session-based)

    • Current workflow state.
    • User preferences.
    • Active task details.

    Level 4: Interaction History (turn-based)

    • Recent actions.
    • Generated outputs.
    • Feedback received.

Why this works: The AI quickly knows where to look. Need a core rule? Check Level 1. Need a user preference? Check Level 3.
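Purely as an illustration (this is not LambdaTest’s actual implementation), a four-level hierarchy like the one above might be represented as follows; the layer names, refresh cadences, and items are hypothetical.

```python
# Hypothetical sketch of a four-level context hierarchy: each layer has
# its own refresh cadence, and the agent looks up only the layer it needs.

context_layers = {
    "level_1_system_policies": {      # rarely changes
        "refresh": "on release",
        "items": ["core functionality rules", "security requirements"],
    },
    "level_2_feature_docs": {         # monthly updates
        "refresh": "monthly",
        "items": ["feature specs", "API references"],
    },
    "level_3_user_context": {         # session-based
        "refresh": "per session",
        "items": ["current workflow state", "user preferences"],
    },
    "level_4_interaction_history": {  # turn-based
        "refresh": "per turn",
        "items": ["recent actions", "feedback received"],
    },
}

def lookup(level):
    """Go straight to the level that holds what the agent needs."""
    return context_layers[level]["items"]

print(lookup("level_3_user_context"))
```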

    Memory Management

    We implement both short-term and long-term memory:

    • Short-term (Session Memory): Tracks the current workflow, decisions made, and progress.
    • Long-term (Cross-Session Memory): Learns user patterns, common workflows, and domain-specific knowledge.

    The AI gets smarter with each interaction, like an experienced colleague!

    Our SELECT Strategy

    Our SELECT strategy focuses on Intelligent Context Retrieval, carefully identifying and extracting the most relevant information for each task.

    Intelligent Context Retrieval

We don’t load everything at once. Instead, we retrieve only the context each task actually needs, filtering and ranking information before it ever reaches the context window.

The investment in Context Engineering has fundamentally transformed our AI agents from “useful” to “production-grade”.

Understanding the common failure modes in context management is crucial for reliable AI performance. Each of the four modes (poisoning, distraction, confusion, and clash) highlights how improperly handled information can undermine results.

The first two pillars focus on capturing and retrieving information effectively. WRITE ensures that knowledge is recorded systematically and organized for easy access, while SELECT emphasizes retrieving the most relevant context to guide AI decision-making.

Don’t try to load everything into the AI’s “backpack” at once. Instead, WRITE important information to structured storage outside the context window, and SELECT only what the current step actually needs.

The research cited throughout this post, from Drew Breunig’s taxonomy of context failures to the Gemini 2.5 technical report, Anthropic’s multi-agent write-up, and the Microsoft and Salesforce multi-turn study, expands on the foundations introduced in Part 1. Together, these sources show why context failures happen in real deployments and how disciplined, structured context management sustains accuracy, stability, and consistent performance.

