When most people work with AI, they follow what we might call a "context pushing" approach. They load up their prompts with as much context as they think the AI needs to complete a task. On the surface, this seems logical: Give the AI everything it might need upfront, and it should produce good results.
But there's a fundamental flaw in this approach: You can only provide context based on what you think the AI needs. You're operating from your own mental model of the task, which inevitably contains blind spots. You don't know what you don't know, and those gaps in context can significantly impact the quality of the AI's output.
What is context pulling?
Instead of pushing context to the AI, why not engineer your prompts to make the AI pull context from you through strategic questioning? This flips the script entirely.
The concept of context pulling is elegantly simple: You give the AI a goal, but you design the prompt so that the AI's first job is to ask you the right questions to gather the context it needs to achieve that goal. You're essentially making the AI responsible for determining what information is required, rather than hoping you've guessed correctly.
How does context pulling work?
The power of context pulling lies in prompt engineering. As the sketch after this list illustrates, you craft prompts with explicit instructions for the AI to:
- Understand the goal: Know what it's ultimately trying to achieve
- Identify information gaps: Determine what context it needs to succeed
- Ask strategic questions: Pull that context from you through targeted questioning
- Drill systematically: Continue questioning until it has sufficient information
- Progress deliberately: Only move forward when it's satisfied it can deliver quality output
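To make this concrete, here is a minimal sketch of what such a prompt might look like, written in Python so the goal can be slotted in programmatically. The wording, the numbered rules, and the "READY" signal are illustrative assumptions, not a canonical template.

```python
# A minimal sketch of a context-pulling prompt. The wording, the {goal}
# placeholder, and the "READY" signal are illustrative choices, not a fixed template.

CONTEXT_PULLING_PROMPT = """\
You are helping me achieve this goal: {goal}

Before you produce anything, follow these rules:
1. Restate the goal in one sentence so we agree on what success looks like.
2. List the information you are missing to do this well.
3. Ask me targeted questions to fill those gaps, a few at a time.
4. Keep asking follow-up questions until you have sufficient detail.
5. Only when you are confident you can deliver a strong result, reply with
   the single word READY and wait for my go-ahead before executing.
"""


def build_messages(goal: str) -> list[dict]:
    """Package the prompt as a chat-style message list for whichever model API you use."""
    return [{"role": "system", "content": CONTEXT_PULLING_PROMPT.format(goal=goal)}]


if __name__ == "__main__":
    # Example: point the same template at a CV rewrite.
    for message in build_messages("Rewrite my CV for a senior product management role"):
        print(f"[{message['role']}]\n{message['content']}")
```

The only fixed part is the structure: state the goal, force a questioning phase, and define an explicit signal for when that phase is over.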
A practical example: CV optimization
Imagine you're working with an AI to improve your CV. Rather than uploading your CV and asking the AI to "make it better," a context pulling approach would work as follows.
The prompt instructs the AI to:
- First, ask you for a high-level overview of all the key roles you've held
- Wait for your signal that you're done with that overview
- Then systematically work through each role, one at a time
- For each role, ask clarifying questions about metrics, impact, and specific achievements
- Refuse to move to the next role until it has sufficient detail to write compelling content for the current one
The AI becomes an interviewer, drilling down into your experience, guided by its knowledge of exactly what makes a strong CV in your field.
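Here is one illustrative way such a prompt might be phrased; the exact wording is an assumption, and you would adapt the sequencing to your own field.

```python
# Illustrative only: one possible phrasing of the CV interview prompt described above.
CV_INTERVIEW_PROMPT = """\
Goal: rewrite my CV so each role reads as compelling, evidence-backed impact.

Process you must follow:
1. First, ask me for a high-level overview of every key role I've held
   (title, company, dates). Ask nothing else yet.
2. Wait until I say "overview done".
3. Then work through the roles one at a time, in the order I gave them.
4. For the current role, ask clarifying questions about metrics, team size,
   business context, and specific achievements.
5. Do not move to the next role until you are confident you could write
   strong bullet points for it. Tell me when you reach that point.
"""
```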
How does context pulling produce better results?
Context pulling surfaces questions about information you wouldn't have thought to provide. To return to the CV example: You might not realize that quantifying the size of the team you managed is crucial for your CV, or that the AI needs to understand the business context of a particular achievement to position it effectively.
When the AI asks these questions, you're prompted to provide context that genuinely matters: context the AI has determined it needs based on its understanding of what makes a successful output.
This leads to fundamentally different results. The AI isn't working with the context you think it needs; it's working with the context it knows it needs. That distinction is everything.
The broader application of context pulling
While the CV example is tangible, context pulling can be applied to virtually any AI-assisted task:
- Strategy documents: The AI interviews you about market dynamics, competitive positioning, and organizational capabilities before drafting.
- Product specifications: The AI systematically pulls requirements, constraints, user needs, and technical considerations.
- Content creation: The AI gathers audience insights, tone preferences, key messages, and objectives before writing.
- Data analysis: The AI clarifies what questions you're trying to answer, what decisions hinge on the findings, and what context surrounds the data.
In each case, you're leveraging the AI's domain knowledge to ensure you're providing the right context, not just a lot of context.
5 key principles for implementing context pulling
- Design prompts with questioning frameworks: Build explicit instructions for the AI to ask questions before attempting the task.
- Make questioning systematic: Have the AI work through topics methodically, ensuring thorough coverage.
- Set completion criteria: Instruct the AI on what "sufficient context" looks like so it knows when to stop questioning and start executing (the sketch after this list shows one way to act on that signal).
- Embrace the interview dynamic: Accept that the best results come from dialogue, not monologue.
- Trust the AI's judgement: If the AI is asking for information, it likely needs it, even if you don't immediately see why.
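The principles above amount to a simple conversational loop. Below is a rough sketch of that loop, assuming the prompt ends its questioning phase with the READY signal used in the earlier sketch; `ask_model` is a placeholder for whichever chat-completion API you actually use.

```python
# A rough sketch of the interview loop described above. `ask_model` is a
# placeholder: swap in whichever chat-completion API you actually use.

def ask_model(messages: list[dict]) -> str:
    """Placeholder for a call to your chat model; should return the assistant's reply."""
    raise NotImplementedError("Wire this up to your model API of choice.")


def run_interview(system_prompt: str) -> list[dict]:
    """Let the model pull context through questions until it signals it has enough."""
    messages = [{"role": "system", "content": system_prompt}]
    while True:
        reply = ask_model(messages)
        messages.append({"role": "assistant", "content": reply})
        print(reply)
        if "READY" in reply:           # the completion criterion set in the prompt
            break
        answer = input("> ")           # your answers are the pulled context
        messages.append({"role": "user", "content": answer})
    return messages                    # full transcript: goal plus pulled context
```

Once the loop exits, the accumulated transcript already contains the goal and every piece of context the model asked for, so the final generation step starts from a far richer foundation than a single pushed prompt.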
The paradigm shift in human-AI collaboration
Context pulling represents a fundamental shift in how we think about AI collaboration. Instead of treating AI as a passive tool that processes whatever we feed it, we're treating it as an active partner with the expertise to know what it needs to deliver quality work.
It feels counterintuitive at first. Shouldn't we, as humans, know best what context to provide? But that assumption misses the point. The AI has been trained on patterns of what makes good outputs in countless domains. It knows what information typically matters. By letting it ask the questions, we tap into that knowledge.
The result is outputs that aren't just technically correct, but genuinely useful because they're built on a foundation of context that actually matters, systematically gathered through intelligent questioning.
See more episodes of The Vibe PM podcast here.