The procurement intelligence problem is not finding opportunities. With the right tools and coverage of TED and national platforms, a B2G team targeting European markets can surface 50, 100, or 200 relevant tenders per quarter. The problem is deciding which of those opportunities deserve your finite bid resources — and having the discipline to decline the rest.
This is the prioritization challenge, and it is where most B2G teams underperform. Not because they lack frameworks, but because they lack the data to populate those frameworks and the conviction to enforce them.
Duke's analysis of 61M+ procurement procedures reveals a consistent pattern: the highest-performing suppliers bid on fewer opportunities than their peers, invest more per bid, and win at significantly higher rates. Their secret is not better bid writing — it is better opportunity selection.
This article presents a practical prioritization framework designed for B2G teams that are serious about improving their win rates and revenue efficiency.
Why prioritization is the highest-leverage activity
Before diving into the framework, consider the math that makes prioritization so impactful.
The quality-quantity trade-off
A typical B2G bid writer can produce 10-12 quality bids per quarter. "Quality" here means proposals that are thoroughly tailored to the buyer's needs, competitively positioned, compliant with all requirements, and supported by relevant references.
That same bid writer, under pressure to respond to more opportunities, might produce 18-20 bids per quarter — but at significantly lower average quality. The relationship between bid volume and quality is not linear: past a capacity threshold, every additional bid degrades the quality of all other active bids.
The numbers illustrate why this matters:
Scenario A — High selectivity:
- 10 bids per quarter
- Average quality score: 75/100
- Win rate: 30%
- Wins: 3 contracts
Scenario B — Low selectivity:
- 18 bids per quarter
- Average quality score: 55/100
- Win rate: 12%
- Wins: 2.2 contracts
More bids, fewer wins. The resource investment is nearly double, the output is worse, and the team is more exhausted. This is the paradox that prioritization solves.
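The arithmetic behind the two scenarios is simply bids multiplied by win rate. A minimal sketch, using only the figures from the scenarios above:

```python
# Expected wins per quarter = bids submitted x win rate.
# Bid counts and win rates are taken directly from Scenarios A and B.
scenarios = {
    "A (high selectivity)": {"bids": 10, "win_rate": 0.30},
    "B (low selectivity)": {"bids": 18, "win_rate": 0.12},
}

for name, s in scenarios.items():
    wins = s["bids"] * s["win_rate"]
    print(f"Scenario {name}: {s['bids']} bids -> {wins:.1f} expected wins")
```

Scenario B nearly doubles the bid volume yet produces fewer expected wins, which is the whole case for selectivity in two lines of arithmetic.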
The expected value lens
Every opportunity has an expected value: the probability of winning multiplied by the contract value. A 1 million EUR contract with a 10% win probability has the same expected value (100,000 EUR) as a 200,000 EUR contract with a 50% win probability.
But the 200,000 EUR opportunity requires less bid investment, builds a reference faster, and frees resources for additional pursuits. Expected value alone is insufficient for prioritization — but it provides the quantitative foundation.
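The expected value calculation from the text, sketched as a function. The bid-cost figures in the comparison are illustrative assumptions, not from the article:

```python
def expected_value(win_probability: float, contract_value: float) -> float:
    """Expected value of a pursuit, before bid costs."""
    return win_probability * contract_value

# The two opportunities from the text have identical expected value.
large = expected_value(0.10, 1_000_000)  # 100,000 EUR
small = expected_value(0.50, 200_000)    # 100,000 EUR

# Net of bid investment they diverge. These cost figures are
# hypothetical, purely to show why EV alone is insufficient.
print(large - 40_000, "vs", small - 15_000)
```

The same gross expected value can hide very different net economics once bid investment is subtracted, which is why the framework layers additional dimensions on top of EV.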
The four-dimension scoring framework
Effective prioritization requires evaluating each opportunity across multiple dimensions. Based on patterns observed in Duke's data and the practices of high-performing B2G teams, we recommend a four-dimension framework.
Dimension 1: Strategic fit (25% weight)
Strategic fit measures how well the opportunity aligns with your market position and development objectives.
Sub-criteria:
Market alignment (1-5): Does this opportunity sit within your beachhead sectors and geographies? An IT services company targeting German public sector should score German IT tenders higher than French construction tenders, even if both are technically winnable.
Reference value (1-5): Will winning this contract create a reference that strengthens future bids? A first contract with a major federal ministry has higher reference value than a tenth contract with the same municipal buyer. Consider which references your pipeline most needs.
Buyer relationship potential (1-5): Does this buyer represent repeat business? Government buyers with recurring procurement needs (annual service contracts, framework agreements) are more strategically valuable than one-off purchases.
Strategic learning (1-5): Will this bid — win or lose — teach you something valuable about the market? Early-stage B2G teams may pursue some opportunities primarily for the market intelligence the bidding process provides.
Dimension 2: Competitive position (30% weight)
Competitive position is the strongest predictor of win probability. This dimension assesses your realistic chances based on the competitive landscape.
Sub-criteria:
Competition density (1-5): How many competitors are likely to bid? Duke's historical data can estimate bidder counts for similar tenders by sector, geography, and value range. Score 5 for niche opportunities (2-3 expected bidders) and 1 for commodity categories (15+ expected bidders).
Incumbent analysis (1-5): Is there an incumbent supplier? What is their track record? Displacing a well-performing incumbent is significantly harder than competing for a new contract or replacing an underperforming one. Historical award data reveals incumbent patterns.
Evaluation criteria alignment (1-5): Do the award criteria favor your strengths? In quality-dominant evaluations (55%+ quality weight), a technically strong supplier has an advantage. In price-dominant evaluations, the lowest-cost producer wins regardless of technical differentiation. Analyze the criteria published in the tender notice or documents.
Reference match (1-5): Do your existing references closely match what this buyer needs? Evaluators weight references from similar organizations, similar projects, and similar scales. A reference from a comparable government entity in the same sector scores far higher than a private sector reference for different work.
Unique differentiator (1-5): Do you offer something that competitors cannot easily replicate? This might be specific technology, certified expertise, local delivery presence, or existing integrations with the buyer's systems.
Dimension 3: Resource availability (20% weight)
Even a strategically perfect, competitively favorable opportunity is a bad pursuit if you cannot commit adequate resources to a quality bid.
Sub-criteria:
Team capacity (1-5): Does your bid team have bandwidth? If your best proposal writer is already managing two concurrent bids, adding a third will degrade all three. Score honestly — aspiration is not capacity.
Timeline sufficiency (1-5): Is the response window long enough for your bid process? A 15-day response window for a complex, multi-lot tender requires different resources than a 45-day window. If the timeline requires cutting process steps, the quality reduction should be factored in.
Partnership readiness (1-5): If the bid requires consortium partners or subcontractors, are they identified, engaged, and available? Partnership formation under time pressure produces weaker bids. Score highly if your partnership is pre-established.
Technical delivery confidence (1-5): Can you actually deliver what the tender requires? This seems obvious, but under pipeline pressure, teams sometimes pursue opportunities at the edge of their capabilities. Score based on your genuine delivery confidence, not your aspirational capability.
Dimension 4: Commercial viability (25% weight)
Commercial viability ensures that winning is actually worth winning.
Sub-criteria:
Contract value (1-5): Is the contract value aligned with your cost of sale? A 50,000 EUR contract that requires 120 hours of bid preparation has marginal economics. Score based on the ratio of expected revenue to bid investment.
Margin potential (1-5): Based on market pricing intelligence and your cost structure, can you price competitively while maintaining acceptable margins? Historical award data from Duke's database provides pricing benchmarks for similar contracts.
Payment terms (1-5): Government payment terms range from excellent (30-day payment, common in Nordic countries) to challenging (90-120 days in some Southern European markets). Cash flow impact matters, especially for SMEs.
Contract terms (1-5): Liability exposure, IP ownership, service level penalties, and termination provisions all affect the risk-adjusted value of the contract. Score lower for contracts with onerous or unusual terms.
Follow-on potential (1-5): Is this a standalone contract or a gateway to larger opportunities? Framework agreements, pilot projects, and initial lots within larger programs have disproportionate strategic value.
Applying the framework in practice
The scoring process
For each opportunity that passes your initial filter (matching CPV codes, geography, and value range), apply the framework:
- Score each sub-criterion on a 1-5 scale, using procurement intelligence data where available
- Calculate dimension scores as the average of sub-criteria within each dimension
- Calculate weighted total using the dimension weights (Strategic: 25%, Competitive: 30%, Resource: 20%, Commercial: 25%)
- Apply threshold — typically 3.0-3.5 out of 5.0 for a proceed decision
Threshold discipline
The threshold is the most important number in the framework, and it must be enforced consistently. When teams override the threshold — "this one is special," "we need the revenue," "the CEO knows the buyer" — they systematically allocate resources to lower-probability opportunities at the expense of higher-probability ones.
Set the threshold based on your team capacity and pipeline volume. If you can prepare 10 quality bids per quarter and your monitoring surfaces 60 qualified opportunities, your threshold should select roughly the top 15-20% (the best 10-12 opportunities, accounting for some attrition).
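One way to operationalize this is to derive the threshold from capacity rather than fixing it in advance: rank the quarter's scored opportunities and set the cutoff at the score of the Nth-best, where N is capacity plus a small attrition buffer. A sketch with randomly generated placeholder scores:

```python
import random

# Hypothetical weighted totals for one quarter's 60 qualified opportunities.
random.seed(7)
scores = sorted((round(random.uniform(2.0, 4.5), 2) for _ in range(60)), reverse=True)

capacity = 10          # quality bids the team can prepare per quarter
attrition_buffer = 2   # extra shortlisted to absorb drop-outs

# Threshold = score of the (capacity + buffer)-th best opportunity.
threshold = scores[capacity + attrition_buffer - 1]
shortlist = [s for s in scores if s >= threshold]
print(f"threshold {threshold}, shortlisted {len(shortlist)} of {len(scores)}")
```

With 60 opportunities and 12 slots, this lands at the top 20%, matching the guidance above; the threshold then moves with pipeline volume automatically.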
Dynamic adjustment
The threshold is not static. Adjust based on:
- Pipeline health: If your pipeline is thin, lower the threshold modestly and temporarily. A deliberate, bounded adjustment is different from the desperation bidding described under Mistake 2 below
- Team changes: A new team member learning the bid process reduces effective capacity; raise the threshold until they are productive
- Seasonal patterns: Procurement publication often peaks around fiscal year boundaries. Raise the threshold during peak periods to maintain quality
The portfolio approach
Beyond individual opportunity scoring, effective prioritization requires portfolio-level thinking.
Balancing risk and return
A pipeline of exclusively high-value, low-probability opportunities is risky — you might win nothing for an entire quarter. A pipeline of exclusively small, high-probability opportunities is safe but revenue-limiting. The optimal portfolio balances both.
Target portfolio composition:
| Category | Win probability | Contract value | Portfolio share |
|---|---|---|---|
| Anchor bids | 35-50% | Medium | 40-50% |
| Growth bids | 20-35% | Large | 30-40% |
| Strategic bets | 10-20% | Very large | 10-20% |
This composition ensures predictable baseline revenue (anchors), growth potential (growth bids), and occasional transformative wins (strategic bets).
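A rough sanity check of this mix: expected wins per category for a hypothetical 12-bid quarter, using midpoints of the probability ranges in the table (the bid allocation itself is an illustrative assumption):

```python
# (category, bids allocated, midpoint win probability from the table)
portfolio = [
    ("anchor",    6, 0.425),  # 35-50% probability, roughly half the bids
    ("growth",    4, 0.275),  # 20-35%
    ("strategic", 2, 0.150),  # 10-20%
]

expected = {name: bids * p for name, bids, p in portfolio}
print(expected, "total:", round(sum(expected.values()), 2))
```

Anchors carry most of the expected wins, growth bids add meaningful upside, and strategic bets cost little in expectation while preserving the chance of a transformative award.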
Diversification dimensions
Diversify your portfolio across:
- Buyer type: Do not depend on a single buyer or buyer segment. Mix central government, regional authorities, and municipalities.
- Geography: Spread across 2-3 markets within your EU footprint to reduce country-specific risk (budget freezes, political changes).
- Contract type: Balance one-off contracts with framework agreements and recurring service contracts for pipeline stability.
- Timeline: Stagger bid submissions to avoid resource peaks. If three bids are due the same week, your quality will suffer on all three.
Portfolio review cadence
Review your portfolio weekly:
- Are you on track for your target number of active bids?
- Is the portfolio balanced across risk categories?
- Have any opportunities' scores changed based on new information?
- Are resources allocated proportionally to opportunity expected value?
This cadence prevents drift toward either over-commitment or pipeline gaps.
Using procurement intelligence to score better
The framework above is only as good as the data feeding it. This is where procurement intelligence transforms prioritization from subjective guesswork to data-informed decision-making.
Competition density from historical data
Duke's analysis of 61M+ procedures provides historical bidder counts by sector, geography, and value range. Instead of guessing whether a tender will attract 3 or 15 bidders, you can estimate based on comparable historical tenders.
Buyer patterns from award data
Buyer purchasing history reveals evaluation preferences, incumbent relationships, and budget patterns. A buyer whose last three IT contracts were awarded to the incumbent at steadily increasing prices signals a relationship-driven evaluation. A buyer that switched suppliers for the last two awards signals openness to new providers.
Price benchmarking from awards
Historical award values for similar contracts provide the pricing intelligence that commercial viability scoring requires. Without benchmarks, pricing is a guess. With benchmarks, it is a strategic calculation.
eForms structured data
As eForms adoption increases, more evaluation criteria, lot-level details, and buyer objectives are captured in structured fields. This data feeds directly into competitive position scoring — you can assess evaluation criteria alignment before downloading the tender documents.
Common prioritization mistakes
Mistake 1: Sunk cost bidding
"We already downloaded the documents and started the compliance check — we should finish the bid." If the opportunity scores below threshold, stop. Sunk costs are sunk. The resource recovery is worth more than the low-probability completion.
Mistake 2: Revenue desperation
During pipeline droughts, teams lower their standards and bid on everything. This is precisely wrong. During low-pipeline periods, bid quality matters more because you have fewer chances. Maintain the threshold and invest the freed resources in pipeline-building activities: buyer engagement, market research, partnership development.
Mistake 3: Ignoring competition signals
"We have a great solution — we can win this." Technical confidence without competitive context is dangerous. If three well-established incumbents are likely to bid, your great solution needs to be meaningfully differentiated to justify the pursuit. Score the competitive dimension honestly.
Mistake 4: One-dimensional evaluation
Pursuing opportunities purely on contract value, or purely on win probability, or purely on strategic importance, produces an unbalanced pipeline. The four-dimension framework exists because no single dimension is sufficient. Use all four.
How Duke supports prioritization
Duke's procurement intelligence platform provides the data foundation for every dimension of the prioritization framework:
- Strategic fit: Market coverage and CPV-level analysis across 300+ sources to identify which opportunities align with your market position
- Competitive position: Historical bidder counts, award patterns, and buyer profiling based on 61M+ procedures
- Resource planning: Early identification through PINs and planning notices gives you more time to assess resource allocation
- Commercial viability: Award value benchmarks and contract pattern analysis for pricing intelligence
The combination of comprehensive data and structured analysis makes prioritization a daily operational practice rather than an occasional strategic exercise.
Conclusion
The most important decision in government sales is not how to write a bid. It is which bid to write. Every hour invested in a low-probability opportunity is an hour stolen from a high-probability one. Every bid submitted below your quality standard is a missed chance to demonstrate your best work.
Prioritization is not about saying no to opportunities. It is about saying yes to the right ones — with full conviction, adequate resources, and competitive intelligence. The four-dimension framework presented here provides a structured, repeatable process for making these decisions.
The discipline to enforce the framework is what separates B2G teams that win consistently from those that bid frequently and win rarely. Build the framework, populate it with data, set the threshold, and hold the line. Your win rate will tell you if it is working.