The Pre-Call Research Checklist: 15 Things to Know Before Every Discovery Call
Most reps prepare for discovery calls in one of two bad ways: they either burn 90 minutes chasing random tabs, or they show up with a five-minute skim of LinkedIn and a prayer. Both fail for the same reason: there is no system.
This playbook gives you one. It is a 20-minute, repeatable pre-call research checklist built from public data only. The goal is not to know everything. The goal is to know the few things that change the quality of your questions, your credibility, and your ability to get to next steps.
Practitioners keep repeating the same point: walk in with a hypothesis, not a blank slate. Recent commentary from sellers and leaders on LinkedIn stresses knowing current priorities, job openings, your champion’s digital footprint, the likely buying committee, and the competitive context before the call. Kyle Coleman’s checklist is a good example: leadership bets, job postings, champion footprint, buying committee, and competitors all made his top five for a first meeting, shared at linkedin.com. Gong makes the same point from call data: weak questions kill credibility, while “expert” questions earn longer buyer answers and better outcomes, at gong.io.
Use this checklist in order. Spend about one minute per item, two minutes on the few that matter most for your motion, and stop when you have enough to form a point of view.
1. Company context
- What they actually do — Source: company homepage, product pages, About page. Check how they describe the product, customer, and business model in their own words at datadoghq.com. Why it matters: if you ask questions already answered on their site, you immediately look lazy.
- Size and stage — Source: LinkedIn company page, Crunchbase, investor relations page. LinkedIn gives employee band and headcount trend; Crunchbase gives private-stage context; IR pages confirm public status, as shown at linkedin.com/company/datadog and investors.datadoghq.com. Why it matters: company stage predicts buying speed, budget process, and how formal the evaluation will be.
- What changed in the last 90 days — Source: newsroom, press releases, earnings call page. Scan launches, acquisitions, leadership changes, expansions, or investor events at investors.datadoghq.com/news-releases. Why it matters: discovery gets sharper when you anchor on a recent trigger instead of generic pain.
- Financial health or funding status — Source: SEC EDGAR for public companies, Crunchbase for private companies. Public filings show revenue, profitability, risks, and spending posture at sec.gov/edgar/search. Why it matters: healthy companies buy for speed and scale; stressed companies buy for efficiency, risk, or consolidation.
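The EDGAR step above can be scripted against the SEC's public submissions API, which returns a company's filing history as JSON keyed by CIK. A minimal sketch, with two assumptions flagged in the comments: the CIK shown for Datadog should be confirmed via EDGAR's company search, and the SEC expects automated clients to identify themselves in the User-Agent header.

```python
import json
import urllib.request


def fetch_submissions(cik: str) -> dict:
    """Download a company's filing history from EDGAR's submissions API."""
    url = f"https://data.sec.gov/submissions/CIK{cik.zfill(10)}.json"
    # The SEC asks automated clients to send a descriptive User-Agent.
    req = urllib.request.Request(url, headers={"User-Agent": "YourName you@example.com"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def recent_filings(submissions: dict, forms=("10-K", "10-Q", "8-K"), limit=10):
    """Reduce the payload's parallel arrays to (date, form) pairs for key form types."""
    recent = submissions["filings"]["recent"]
    pairs = [(d, f) for f, d in zip(recent["form"], recent["filingDate"]) if f in forms]
    return pairs[:limit]


# Example (makes a network call):
#   recent_filings(fetch_submissions("1561550"))
# "1561550" is an assumed CIK for Datadog, Inc. -- verify it on EDGAR before use.
```

Ten minutes of setup here pays off across every public-company account you prep, since the same two functions cover them all.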
2. Strategic signals
- Hiring pattern — Source: careers page, LinkedIn Jobs. Look at open roles by function, geography, and seniority at careers.datadoghq.com/all-jobs. Why it matters: hiring tells you where budget and executive attention are going right now.
- Tech stack clues — Source: BuiltWith, Wappalyzer, job descriptions. Use builtwith.com and wappalyzer.com, then confirm from jobs that mention tools or architecture. Why it matters: you need a live hypothesis on integrations, migration pain, and incumbent vendors.
- Recent product launches or expansion bets — Source: blog, product news, press releases. Product and expansion news often lives in the company blog or newsroom, like Datadog’s launch pages at datadoghq.com/blog. Why it matters: launches create new workflows, new pressure, and often new tooling gaps.
- Regulatory, grant, or compliance activity — Source: SEC filings, government grant databases, newsroom, industry regulator sites. For public companies, start with 10-K risk factors; for regulated verticals, check relevant agencies. Why it matters: compliance pressure creates urgency faster than feature interest.
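The hiring-pattern scan above can sometimes be automated when the target's careers site runs on a public job-board backend. A sketch assuming Greenhouse's Job Board API; the board token "datadog" is a guess, and many accounts use a different backend entirely, so treat both as assumptions to confirm against the actual careers page.

```python
import json
import urllib.request


def fetch_departments(board_token: str) -> dict:
    """Pull departments (with nested jobs) from Greenhouse's public Job Board API."""
    url = f"https://boards-api.greenhouse.io/v1/boards/{board_token}/departments"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


def openings_by_department(payload: dict) -> dict:
    """Count open roles per department, skipping departments with no openings."""
    return {
        dept["name"]: len(dept["jobs"])
        for dept in payload["departments"]
        if dept["jobs"]
    }


# Example (makes a network call):
#   openings_by_department(fetch_departments("datadog"))
# "datadog" is an assumed board token -- confirm the account's careers backend first.
```

The department counts are the whole point: a spike in security or FinOps roles tells you where budget is moving before the buyer says a word.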
3. Org and contact intelligence
- Who you are meeting — Source: LinkedIn profile. Confirm title, scope, and likely charter on linkedin.com. Why it matters: a VP, director, and manager need completely different questions.
- Their tenure and background — Source: LinkedIn experience history. Check whether they are new, promoted, or hired from a competitor. Why it matters: new leaders often have more appetite to change systems and prove impact.
- Their recent digital footprint — Source: LinkedIn posts, comments, podcast appearances, authored content. Kyle Coleman specifically calls this out because recent public activity reveals what the contact cares about now at linkedin.com. Why it matters: this is where you find language, priorities, and angles that actually feel personal.
- Who else is likely in the buying committee — Source: LinkedIn people tab, org chart clues, job titles, press releases. Search adjacent leaders in ops, finance, security, IT, or product. Why it matters: deals stall when you discover the real stakeholders after the first call instead of before it.
4. Competitive and sentiment context
- What tools they likely run today — Source: BuiltWith, Wappalyzer, job ads, implementation partner pages. Use multiple clues, not one source. Why it matters: discovery gets better when you ask how they use the current stack, not whether they have one.
- What review data says about their current stack — Source: G2, Gartner Peer Insights, TrustRadius. For example, G2 comparison pages show Datadog repeatedly benchmarked against Splunk, Dynatrace, New Relic, and Grafana at g2.com. Why it matters: review language gives you likely complaints, tradeoffs, and switching triggers to test in discovery.
- Known competitor activity at the account — Source: newsroom, customer stories, partner pages, job ads, social posts. If they just hired around a platform, launched an integration, or publicly backed a vendor, note it. Why it matters: you want competitor context before the buyer brings it up, not after.
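One cheap cross-check for the tech-stack items above: scan the homepage HTML for vendor script signatures, the same way BuiltWith and Wappalyzer do at larger scale. The signature substrings below are illustrative assumptions, not a maintained fingerprint database, so extend them for your own category.

```python
import urllib.request

# Illustrative vendor signatures -- substrings commonly seen in page source.
# Treat these as assumptions to maintain, not a definitive fingerprint list.
SIGNATURES = {
    "Google Tag Manager": "googletagmanager.com",
    "Segment": "cdn.segment.com",
    "HubSpot": "js.hs-scripts.com",
    "Marketo": "munchkin.js",
}


def detect_vendors(html: str, signatures: dict = SIGNATURES) -> list:
    """Return vendors whose script signature appears in the page source."""
    return sorted(name for name, sig in signatures.items() if sig in html)


def scan_homepage(url: str) -> list:
    """Fetch a homepage and report matching vendor signatures (network call)."""
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req) as resp:
        return detect_vendors(resp.read().decode("utf-8", errors="ignore"))


# Example (makes a network call): scan_homepage("https://www.example.com")
```

Client-side scripts only reveal part of the stack, which is why the checklist says to triangulate with job ads and partner pages rather than trust any single source.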
What is usually missing from prep? Not more company trivia. The missing pieces are the ones that shape the conversation: current priorities, real stakeholders, current tools, and trigger events. That lines up with practitioner commentary on LinkedIn and with discovery advice from Gong and Close: deals stall when reps ask generic questions, fail to establish urgency, and leave without a grounded next step at gong.io and close.com.
Time budget:
- Company/site scan: 2 minutes
- News and filings: 2 minutes
- Hiring and tech stack: 4 minutes
- Contact and buying committee: 6 minutes
- Competitive and review context: 4 minutes
- Write your opening hypothesis: 2 minutes
Total: about 20 minutes.
Worked example: Datadog
Here is what a completed checklist looks like against a real account using public data.
- What they do: Datadog describes itself as the monitoring and security platform for cloud applications at datadoghq.com.
- Size and stage: Public company, founded in 2010, with 1,001-5,000 employees shown on LinkedIn and investor relations at linkedin.com/company/datadog and investors.datadoghq.com.
- Recent news: In April 2026 Datadog launched Experiments; in March 2026 it launched MCP Server for AI agents; in February 2026 it announced DASH 2026 at investors.datadoghq.com/news-releases.
- Financial health: The 2025 Form 10-K on EDGAR highlights continued growth but still lists profitability durability and IT spending sensitivity as risks at sec.gov.
- Hiring pattern: The careers site showed hundreds of openings globally, with heavy hiring in sales, technical solutions, engineering, AI engineering, and security at careers.datadoghq.com/all-jobs.
- Tech stack clue: Jobs reference AI platform, security engineering, cloud FinOps, and enterprise sales engineering, signaling deep investment in AI infrastructure and security workflows at careers.datadoghq.com.
- Product bet: Datadog launched Feature Flags in February 2026 and positioned it around safer software releases tied to observability data at datadoghq.com.
- Regulatory/compliance angle: As a public company selling into enterprise security and cloud operations, risk language in its 10-K points to security, growth management, and macro IT spend sensitivity at sec.gov.
- Who you are meeting: An example target could be a security sales engineering leader or a product solutions architect, identified on LinkedIn by function and region at linkedin.com.
- Tenure and background: Check whether that contact came from Splunk, New Relic, or AWS. That gives immediate context for incumbent bias and rollout style.
- Recent activity: Datadog’s LinkedIn presence and leadership pages show current messaging around AI, observability, and security at scale at linkedin.com/company/datadog.
- Buying committee: Likely stakeholders include engineering, platform, security, FinOps, and procurement based on product breadth and role mix on the careers site.
- Current tools: A target account in Datadog’s category is likely also evaluating or running Splunk, Grafana, Dynatrace, or New Relic based on G2 comparison patterns.
- Review-site sentiment: G2 comparison pages show Datadog outperforming Splunk Infrastructure Monitoring on user satisfaction and appearing frequently in observability shortlists at g2.com.
- Competitor activity: G2’s alternatives pages repeatedly connect Datadog to Splunk, Dynatrace, Grafana, and New Relic, which gives you a realistic competitor set before the first question at g2.com.
If you run this checklist before every discovery call, you will walk in with more relevant context than most reps your buyer talks to. That means better opening hypotheses, better follow-up questions, earlier objection handling, and stronger next-step conversion.
If you want more reference playbooks like this one, join SalesInt’s paid tier. That is where we publish the saved, printable frameworks serious reps actually reuse on live accounts.