Every Bangalore CTO who has solicited two VAPT quotes has stared at quote-to-quote variation of 5–10× and wondered what the difference actually buys. VAPT cost in India is not arbitrary — it is the product of seven independent factors, each of which a procurement team can probe during scoping. This guide is an operational decomposition of those factors, so that the next time your team writes an RFP you can read three quotes side by side and know precisely why one is half the price of another.
The seven factors compound. An automated baseline scan and a manual red-team simulation are not the same service even when both are sold as “VAPT” — they sit at opposite extremes on every dimension below. Knowing where on each axis your engagement actually needs to sit produces apples-to-apples quote comparison and avoids the most common procurement failure: paying for depth you don’t need or buying shallow output that fails regulator scrutiny.
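To make the compounding concrete, the sketch below multiplies illustrative per-factor cost multipliers for a shallow quote and a deep quote. Every number in it is an assumption chosen for demonstration, not a vendor price list; the point is that independent multipliers in plausible ranges reproduce the 5–10× spread.

```python
# Illustrative cost multipliers per factor (assumed values, not real pricing).
# Each factor scales a notional baseline fee independently.
FACTORS = {
    "depth":     {"automated": 1.0, "manual_led": 3.5},
    "staffing":  {"junior": 1.0, "partner_reviewed": 1.6},
    "retest":    {"billed_separately": 1.0, "included_until_closure": 1.3},
    "reporting": {"cvss_table": 1.0, "regulator_formatted": 1.15},
    "delivery":  {"remote": 1.0, "onsite": 1.2},
}

def quote_multiplier(choices):
    """Compound the per-factor multipliers for one quote's configuration."""
    m = 1.0
    for factor, option in choices.items():
        m *= FACTORS[factor][option]
    return m

shallow = quote_multiplier({"depth": "automated", "staffing": "junior",
                            "retest": "billed_separately",
                            "reporting": "cvss_table", "delivery": "remote"})
deep = quote_multiplier({"depth": "manual_led", "staffing": "partner_reviewed",
                         "retest": "included_until_closure",
                         "reporting": "regulator_formatted", "delivery": "remote"})
ratio = deep / shallow  # roughly 8x with these assumed multipliers
```

Under these assumed multipliers the two quotes differ by roughly 8× for the same asset list — both sold as "VAPT", describing very different services.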
Before the cost discussion: what VAPT actually means in India
VAPT — Vulnerability Assessment and Penetration Testing — is a combined engagement that identifies security weaknesses (vulnerability assessment) and attempts to exploit them (penetration testing) to demonstrate real-world impact. In the Indian regulatory context, VAPT is mandated by:
- CERT-In for government and regulated entities, with category-specific empanelment requirements (see our CERT-In Empanelled Auditor List).
- RBI for banks, NBFCs, and payment aggregators under the Cyber Security Framework.
- SEBI for stock brokers, AMCs, and Market Infrastructure Institutions under the CSCRF (see SEBI CSCRF Compliance — Stock Broker Field Guide).
- IRDAI for insurers and insurance intermediaries.
What VAPT is not (each confusion below either inflates the budget or buys less assurance than the regulator expects):
- It is not a one-time checkbox — RBI and SEBI expect quarterly or half-yearly cycles for digital-channel-heavy entities.
- It is not an automated scan — a proper engagement includes manual exploitation.
- It is not a guarantee of security — it is a point-in-time assessment.
- It is not interchangeable with a code review — source-code audits are a separate discipline.
Factor 1 — Scope breadth (asset count and asset diversity)
The single biggest cost driver. The dimensions auditors price against:
- Web application count — every distinct web app, including admin portals and partner portals, is a separate asset.
- API endpoint count — modern Indian SaaS engagements routinely discover 40–80 internal-facing APIs that the development team did not initially scope.
- Mobile application count — iOS and Android count separately; each platform has distinct test cases.
- Network IP count — internal and external network ranges; cloud-native environments often have far fewer addressable IPs than on-premises.
- Cloud account count — multi-cloud (AWS + GCP + Azure) significantly expands configuration-review effort.
- Wireless networks — separate scope where in-scope.
The cost lever: doubling asset count typically increases engagement effort by 50–80% — sub-linear, because some setup cost is fixed. Scope decision rule: map every asset before scoping; the classic procurement failure is “we forgot the partner portal” surfacing as a change order during fieldwork.
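The sub-linear scaling is easy to model. The sketch below uses an assumed fixed-setup-plus-per-asset effort model; the day counts are invented for illustration, but any positive fixed cost produces the same shape.

```python
# Illustrative effort model (assumed day counts, not vendor pricing):
# a fixed setup component plus per-asset testing effort. Because the
# setup is amortised, doubling the asset count raises total effort
# by less than 2x.
def engagement_effort_days(assets, setup_days=10.0, days_per_asset=2.5):
    return setup_days + days_per_asset * assets

base = engagement_effort_days(10)     # 10 assets
doubled = engagement_effort_days(20)  # 20 assets
increase = doubled / base - 1         # ~0.71, i.e. +71% for 2x the assets
```

With these assumed values, doubling scope adds about 71% effort — inside the 50–80% band quoted above.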
Factor 2 — Testing depth (automated vs manual vs business-logic)
The same asset can be tested at three depths, each with materially different effort:
Automated scan. Nessus, OpenVAS, Burp Pro, Qualys, or comparable tools produce a CVSS-scored finding list with limited false-positive elimination. Useful as a baseline; rarely sufficient for regulator submission.
Manual penetration test. Skilled testers exploit vulnerabilities, chain findings, and validate impact. Required for CERT-In empanelled output. Typical engagement spends 60–80% of effort on manual testing once automated baseline is established.
Business-logic and IDOR testing. Manual testers map the application’s intended workflow, then test for authorisation bypass, race conditions, BOLA (Broken Object Level Authorisation), and violations of business intent. The most labour-intensive depth, the hardest to automate, and the source of the highest-impact findings in modern API-heavy applications.
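As a hedged illustration of what IDOR/BOLA testing involves, the sketch below generates cross-user test cases: every object-bearing endpoint is requested with a session that does not own the object. The endpoints, user names, and object IDs are invented for demonstration; a real tester works from the application's actual object model.

```python
# Hypothetical IDOR/BOLA test-case generation: request each user's
# objects with the OTHER user's session. A 200 response returning the
# victim's data would indicate broken object-level authorisation.
from itertools import permutations

endpoints = ["/api/v1/invoices/{id}", "/api/v1/payment-history/{id}"]
sessions = {"alice": ["inv-101", "inv-102"], "bob": ["inv-201"]}

def idor_cases(endpoints, sessions):
    cases = []
    # Every ordered (attacker, victim) pair of distinct users.
    for attacker, victim in permutations(sessions, 2):
        for object_id in sessions[victim]:
            for ep in endpoints:
                cases.append((attacker, ep.format(id=object_id)))
    return cases

cases = idor_cases(endpoints, sessions)
# Each case pairs a session with a URL owned by someone else.
```

This is the mechanical skeleton; the labour-intensive part is mapping which objects exist, which roles should see them, and which workflow states make access legitimate.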
The cost lever: automated-only engagements cost 20–30% of equivalent manual-led engagements. CERT-In empanelled engagements default to manual-led depth. Depth decision rule: if your buyer is a regulator (RBI, SEBI, CERT-In) or an enterprise buyer with a security questionnaire, manual depth is mandatory; automated-only output will be rejected.
Factor 3 — Tester seniority and partner-level review
Auditor-side staffing materially affects cost and outcome quality:
- Junior analyst-led engagements — fast turnaround, lower cost, surface findings only. Suitable for compliance-baseline engagements where the regulator accepts standard findings.
- Senior consultant-led engagements — slower turnaround, higher cost, deeper findings including business-logic flaws. Suitable for product-criticality engagements where the goal is genuine attacker simulation.
- Partner-reviewed engagements — consultant-led with partner-level review of methodology and findings before report issuance. The pattern in mature CERT-In empanelled engagements; produces higher-credibility output that survives regulator scrutiny.
The cost lever: same scope, same depth, the staffing model alone can move cost by 40–80%. Staffing decision rule: for first-time engagements with regulator submission, insist on partner-reviewed output; for routine annual renewals where methodology is established, consultant-led is sufficient.
Factor 4 — Re-test policy
The most-misunderstood line item in VAPT contracts. Three common structures:
Included re-test until closure. Engagement fee includes re-testing of identified findings until validated closure. Auditor incentive aligns with quality first-pass delivery. Default in well-structured CERT-In empanelled engagements.
Capped re-test. Engagement includes one re-test cycle within 30 days of report delivery; subsequent re-tests are billed separately. Common in Tier-1 baseline engagements.
Billed re-test. Re-tests are entirely separate engagements, billed at hourly rates. Auditor incentive misaligns with quality first-pass delivery. Avoid this structure.
The cost lever: a re-test-included engagement appears 20–35% more expensive than a billed-re-test engagement on first quote, but typically costs less in total because re-test cycles are inevitable. Re-test decision rule: insist on “included until closure” or “included with one cycle”; avoid “billed separately”.
Factor 5 — Reporting granularity (CVSS table vs board-ready vs regulator-formatted)
The same finding population can be reported at three levels of granularity:
- CVSS table. Tool-output finding list with severity scores. Cheapest reporting tier; rarely sufficient for board or regulator audiences.
- Risk-prioritised report. Findings ranked by business impact rather than CVSS alone, with remediation guidance and executive summary. Typical for enterprise-grade engagements.
- Regulator-formatted report. Specific structure required by RBI, SEBI, IRDAI, or CERT-In, with mandated section headings and content. Adds reporting effort but is required for regulator submission.
The cost lever: regulator-formatted reporting adds 10–20% to engagement effort relative to risk-prioritised. Reporting decision rule: confirm in the SOW which regulator format applies; “we’ll customise the report” is a red flag for re-work after fieldwork.
Factor 6 — On-site vs remote delivery
Cloud-native environments are typically tested remotely; on-premises and air-gapped environments require on-site delivery. The cost dimensions:
- Travel and accommodation cost — direct expense passed through.
- On-site time per tester — typically 2–3 days for kickoff and exit meetings; 5–10 days for full on-site testing.
- Loss of parallel testing — on-site testing is inherently serial; remote testing can parallelise across multiple testers more easily.
The cost lever: on-site testing typically adds 15–25% relative to remote-equivalent scope. Delivery decision rule: for cloud-native SaaS, remote is sufficient and cheaper; for BFSI internal-network testing, hybrid (remote for cloud + on-site for internal network) is the typical structure.
Factor 7 — CERT-In empanelment category requirement
CERT-In empanelment is structured by service category. Tender clauses sometimes reference “CERT-In empanelled” without specifying category, leading to procurement-time disputes. The categories that matter for VAPT:
- Penetration Testing and Vulnerability Assessment — the canonical VAPT category.
- Application Security Audits — for web and mobile application engagements.
- Information Security Audit Services — broader baseline; some VAPT engagements fall here.
- Source Code Audits — for code-review-inclusive engagements.
- Wireless Network Audits — for engagements covering Wi-Fi or wireless infrastructure.
- ICT Audits — for combined IT and security review.
A firm empanelled for “Information Security Audit Services” but not for “Penetration Testing and Vulnerability Assessment” may not be acceptable for a tender that specifies the latter. The cost lever: CERT-In empanelled firms charge a structural premium over non-empanelled firms because empanelment carries audit-quality and reporting-discipline overhead. Non-empanelled firms cost less but cannot satisfy regulator-mandated tender clauses.
Empanelment decision rule: verify category-specific empanelment before issuing a PO; check the CERT-In Empanelled Auditor List for current standing.
How the seven factors compound — vertical-specific patterns
Patterns we observe across India engagements:
Bangalore SaaS — multi-cloud and API-heavy
Modern Bangalore SaaS engagements typically include 1–3 web applications, 1–2 mobile apps, 30–80 API endpoints, multi-cloud infrastructure, and supporting services (admin portal, partner portal). Manual-led depth, partner-reviewed staffing, included re-test, risk-prioritised reporting, remote delivery, CERT-In empanelment for buyer credibility. The engagement clusters in the middle of the cost band; the variable that catches founders off-guard is API count expansion during scoping.
BFSI — RBI-aligned reporting overlay
RBI-regulated entities require VAPT reports formatted to specific RBI expectations. Manual-led depth with partner review, included re-test, RBI-formatted reporting, hybrid delivery (remote cloud + on-site internal network), CERT-In empanelled mandatory. Annual cycles required; quarterly cycles for digital-channel-heavy entities. The major cost driver is breadth — banks and large NBFCs have 50+ in-scope applications, pushing toward upper-band engagements.
Fintech and payment aggregators — PCI-DSS overlap
Payment aggregators face dual VAPT requirements: CERT-In empanelled VAPT for general security assurance, and PCI-DSS-aligned testing for payment card environments. Joint delivery avoids duplication; combined fees concentrate at upper-mid band. The PCI-DSS overlay eliminates a separate ASV-scan engagement.
Crypto exchanges — wallet and key-management depth
Indian crypto exchanges registered with FIU-IND face heightened expectations on hot/warm/cold wallet testing, smart-contract review (where applicable), and key-management validation. Multi-environment scope across wallets, signing infrastructure, and trading APIs pushes engagements to the upper end of the cost band.
HealthTech — PHI access controls and DPDP overlay
Hospital and telemedicine integrations introduce healthcare-specific test cases: PHI access controls, audit-log validation for clinical data access, biometric data handling under DPDP and DISHA expectations.
Government and PSU — empanelment-rigid format
CERT-In Directions explicitly require empanelled auditors for government and PSU security assessments, with mandatory category-specific empanelment. The engagement format is more rigid than commercial engagements, with specific report templates and sometimes on-site delivery requirements.
Methodology choice — black-box, grey-box, white-box, red-team
The seven factors above implicitly assume “grey-box” methodology. Other testing models exist with different cost profiles:
Black-box (zero-knowledge). Auditor receives only public-facing information; no credentials, no documentation. Closest to a real external attacker’s perspective but slowest to demonstrate findings. Adds 30–40% to engagement cost. Recommended after grey-box engagements have established baseline coverage.
Grey-box (standard). Auditor receives credentials, documentation, and architectural context. Best balance of coverage and cost. Default for most CERT-In empanelled engagements.
White-box (full-disclosure). Auditor receives source code access, architectural documents, threat models, and prior-engagement findings. Fastest route to comprehensive findings; lowest cost per finding. Adds only 10–15% to total cost over grey-box. Recommended for first-time engagements.
Red-team (objective-driven). Operates under “assume-breach” mandate with a specific objective (e.g., “exfiltrate the customer database”). Tests detection, response, and recovery in addition to control effectiveness. Substantially more expensive; typically reserved for mature security programmes validating detection capability.
Common VAPT procurement mistakes
- Buying on price without checking scope. Quotes that look 5× cheaper are typically scoped 5× shallower — automated-only, no re-test, surface reporting.
- Ignoring the re-test clause. Findings without verified closure are worthless for regulator reporting.
- Forgetting category-specific empanelment. A firm empanelled for “Information Security Audit” may not satisfy a tender requiring “Penetration Testing and Vulnerability Assessment”.
- Accepting a generic report. RBI, SEBI, IRDAI, and CERT-In each have specific reporting expectations. Generic output triggers re-work.
- Not involving the dev team during scoping. Scope ambiguity is the single largest cause of post-delivery disputes; engineering input on asset count is mandatory.
- Underestimating API count. Modern SaaS environments routinely have 5–10× more API endpoints than the development team initially scopes; surfacing this late is expensive.
- Ignoring continuous-engagement options. Some firms offer continuous-engagement pricing that produces faster discovery than traditional quarterly cycles.
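The API-count mistake above is avoidable before the RFP goes out: if the team maintains an OpenAPI/Swagger spec, counting method-level endpoints takes a few lines. The spec fragment below is invented for illustration; in practice you would load the team's real spec file.

```python
# Sketch: count method-level API endpoints from an OpenAPI spec so the
# quote reflects the real attack surface. Spec fragment is invented.
import json

spec = json.loads("""{
  "paths": {
    "/users": {"get": {}, "post": {}},
    "/users/{id}": {"get": {}, "patch": {}, "delete": {}},
    "/admin/reports": {"get": {}}
  }
}""")

# Filter to HTTP verbs: OpenAPI path items can also carry non-operation
# keys such as "parameters" or "summary".
HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

def count_endpoints(spec):
    return sum(1 for methods in spec["paths"].values()
               for m in methods if m in HTTP_METHODS)

total = count_endpoints(spec)  # 6 method-level endpoints in this fragment
```

Running this against a production spec before scoping is the cheapest way to surface the 5–10× endpoint gap during the RFP rather than during fieldwork.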
Vendor evaluation rubric for VAPT in India
Five questions that surface vendor quality faster than asking for a quote:
- What percentage of the engagement is manual versus automated? Any firm that cannot answer is selling a scan, not a test.
- Is re-testing included until findings close, or billed separately? Separate billing incentivises low-quality first-pass delivery.
- Will the report format satisfy RBI / SEBI / CERT-In expectations specifically applicable to my entity? The answer should be specific, not “we can customise”.
- Who signs the report, and is that person the one who attends regulator meetings? Partner accountability matters.
- Can you fix the engagement fee in writing before kickoff? Variable billing is a red flag.
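The five questions above reduce to a pass/fail screen. The sketch below encodes them as a checklist; the field names are invented labels for the five questions, and the all-or-nothing pass rule is a simplifying assumption — in practice some answers weigh more than others.

```python
# Hypothetical vendor screen against the five rubric questions.
# Field names are illustrative labels, one per question above.
def screen_vendor(answers):
    """answers: dict mapping rubric question to a boolean response."""
    red_flags = [question for question, ok in answers.items() if not ok]
    return {"pass": not red_flags, "red_flags": red_flags}

quote = {
    "manual_effort_percentage_stated": True,
    "retest_included_until_closure": False,  # billed separately: red flag
    "regulator_format_named_specifically": True,
    "report_signatory_attends_regulator_meetings": True,
    "fixed_fee_in_writing": True,
}
result = screen_vendor(quote)
# One red flag: the re-test clause. Worth renegotiating before the PO.
```

A quote failing more than one question is usually not worth negotiating; a single red flag, especially the re-test clause, is often fixable in the SOW.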
Cross-framework note: VAPT as evidence for SOC 2 and ISO 27001
VAPT engagement output is dual-use. The same engagement that satisfies a CERT-In tender can produce evidence used in SOC 2 Type II Security TSC fieldwork and ISO 27001:2022 A.8.29 control evidence. Scoping the VAPT engagement once with multi-framework reporting outputs is materially more efficient than running separate engagements per framework.
Practical next steps
If you are writing an RFP, download our VAPT RFP Template for a pre-structured scope document. If you need to verify your shortlisted vendor’s empanelment, see the CERT-In Empanelled Auditor List. If you want to scope a specific engagement, our VAPT services page walks through the methodology.
For a thirty-minute scoping conversation with a partner, the contact form in the site footer books the call directly. We commit to written scope, fixed engagement fee, and direct partner-level accountability through the engagement.
VAPT cost FAQ
Why is there such a wide price range for VAPT in India? Because seven independent factors compound. Automated baseline scans and manual partner-reviewed engagements are not the same service — they sit at opposite extremes on every factor.
Can I get a free VAPT? Some firms offer “free” preliminary scans to win larger engagements. The free portion is typically a Nessus or OpenVAS scan output without manual analysis. Useful as baseline awareness; not sufficient for regulator submission.
Does VAPT cost include re-test? It depends on the engagement structure. Reputable CERT-In empanelled engagements include re-testing until findings close. Lower-cost engagements often bill re-tests separately. Verify before signing.
How does VAPT differ from a vulnerability scan? A vulnerability scan is automated tool output. VAPT combines vulnerability assessment (identification) with penetration testing (manual exploitation and impact validation). The “PT” component produces business-relevant findings.
Can I run VAPT on production? Yes, with appropriate scope rules and rate limiting. Most CERT-In empanelled engagements are conducted on production with documented rules of engagement to prevent service impact.
Do I need separate VAPT for AWS, Azure, and GCP? If your environment spans multiple clouds, the engagement scope should cover all of them. Each cloud has different service models and configuration patterns; per-cloud test plans may be needed.
How often should I conduct VAPT? Annual minimum for general SaaS; quarterly for BFSI per RBI Cyber Security Framework expectations; quarterly + on-major-release for SEBI-regulated entities; per-major-release for crypto exchanges.
Do all CERT-In empanelled firms charge similarly? No. Variations of 50–100% on similar scope are common. Differentiating factors: partner-level accountability, manual-effort percentage, India regulator engagement experience, sector-specific track record.
Can a Big-4 firm be cheaper than a boutique? Rarely. Big-4 firms typically price at the high end of the range due to cost-base structure. Boutique CERT-In empanelled firms with strong technical depth often deliver comparable quality at meaningfully lower cost.
Does the engagement include a board-ready presentation? Most engagements include an executive summary; a full board presentation is sometimes a separate deliverable. Specify in the SOW if you need a board-pack walkthrough.
Is on-site testing more expensive than remote? Yes, by approximately 15–25% due to travel and time costs. Remote testing is typically sufficient for cloud-native environments; on-site is needed for internal-network testing or environments without remote access.
Multi-cycle engagement economics
Annual VAPT cycles produce learnings that compound:
- Year 1. Baseline assessment, often higher finding count, substantial remediation effort.
- Year 2. Year-1 remediation validated; new findings concentrate on changes since Year 1. Lower remediation effort.
- Year 3. Continuous improvement loop established. New findings often relate to product features added that year.
- Year 5+. Findings concentrate on emerging threat patterns and architectural decisions. The annual cadence becomes operational hygiene rather than discovery exercise.
Multi-year VAPT relationships with the same firm produce cumulative environmental knowledge that one-off engagements lose. Most BFSI clients we engage with are multi-year relationships precisely for this reason.
What separates a great VAPT engagement from an adequate one
Beyond the seven cost factors, qualitative differentiators emerge during execution:
Threat-modelling depth. Great engagements begin with a structured threat model — review of architecture diagrams, data-flow analysis, attacker-perspective mapping. Adequate engagements skip threat modelling and go straight to scanning.
Manual exploitation depth. Great engagements demonstrate exploitability through chained findings — combining a low-severity issue with a medium issue to produce high-severity impact. Adequate engagements list findings independently without exploitation chains.
Business-context relevance. Great engagements understand your business and prioritise findings by business impact. “BOLA on payment-history endpoint” is high-priority for a fintech; “BOLA on user-preferences endpoint” is lower-priority. Adequate engagements treat all BOLA findings as equivalent.
Remediation-engineering depth. Great engagements provide specific remediation guidance — code-level changes, configuration changes, architectural recommendations. Adequate engagements provide generic OWASP remediation links.
Communication during execution. Great engagements have daily check-ins surfacing findings as they emerge. Adequate engagements report at the end without intermediate communication.
Documentation quality. Great engagements produce reports your engineering team actually reads and uses. Adequate engagements produce reports filed in compliance archives without operational impact.
The economically efficient VAPT engagement is not the cheapest quote at the cheapest firm; it is the engagement scoped to actual buyer and regulator demand, delivered at the right depth for that demand, against a partner-reviewed methodology, with re-test included until closure.