Application-layer attacks need application-layer auditors. The cheap end of the Bangalore VAPT market — which prices a full engagement at ₹40,000–₹80,000 — runs Nessus or Acunetix against your application, formats the output into a 30-page PDF, and calls it a web application security audit. The output catches outdated jQuery libraries and missing security headers; it misses every business-logic flaw, every IDOR, every authorisation bypass, every authentication edge case, and every integration-seam issue between your application and its dependencies. Those are the findings that actually matter — the ones a determined attacker exploits and the ones your enterprise buyer’s security review asks about. This page describes the methodology our application-security specialists use to find them.
Why web application testing is different
Network VAPT and web application testing share the word "VAPT" but are different disciplines. Network VAPT examines the infrastructure plane — services, ports, protocols, configurations — and is largely tool-driven (Nessus, Nmap, Nuclei). Web application testing examines the application plane — the business logic implemented by the code your engineers wrote — and is largely manual. The skill set is different (a senior network engineer is rarely a senior application-security engineer and vice versa), the toolchain is different (Burp Suite Pro is the canonical web-app tool, equivalent to Nessus for the network world), and crucially the findings are different. The same client running a network VAPT and a web application test will receive two reports with almost no overlap.
The bulk of the value in a web application engagement comes from manual analysis. Automated tools find about 25% of real issues (typically the obvious ones — outdated dependencies, missing headers, basic XSS). The remaining 75% — and 100% of the highest-impact findings — come from an engineer who reads your sequence diagrams, runs your authentication flow under a debugger, traces your authorisation checks across endpoints, and identifies the moments where intent and implementation diverge.
Who in Bangalore needs this
Most Bangalore B2B SaaS companies need a serious web application engagement at least annually. The drivers vary by buyer mix:
SaaS companies pursuing SOC 2 / ISO 27001
Both frameworks require evidence of application-security testing, and generic network VAPT does not cover the application layer adequately for either auditor. Roughly half of our SOC 2 and ISO 27001 engagements bundle a web application test; the bundling is detailed on our SOC 2 service page.
Fintech and BFSI vendors
RBI and SEBI require quarterly application-security testing for digital channels and customer-facing apps. Pure network VAPT does not satisfy the regulator’s expectation of "comprehensive penetration testing" — application-layer testing is now table stakes.
HealthTech and EdTech consumer products
Consumer apps handling PII or PHI need application-layer focus on data exposure, IDOR (the single highest-impact class for multi-tenant consumer apps), and authentication edge cases.
Companies after a public security incident
If your application has been the target of a credential-stuffing attack, a data-exposure event, or any other application-layer incident, the post-incident audit needs to be application-layer. We have run dozens of post-incident engagements in Bangalore and the typical finding is that the original attack vector is paired with two or three additional unexploited issues that would have been the next step.
OWASP Top 10 (2021) — what we test
The OWASP Top 10 is the most-referenced application security awareness document, currently in its 2021 revision (the 2025 revision is in draft as of this page’s last edit). Each category in the Top 10 maps to a specific test plan in our engagement.
A01 · Broken Access Control
The largest and most-impactful category. We test for IDOR (Insecure Direct Object Reference) across tenancy boundaries — can user A access user B’s data by changing an ID in the URL, request body, or JWT claim — vertical privilege escalation (free-tier accessing paid features), horizontal privilege escalation (admin functions accessible by non-admins), forced browsing, mass-assignment in update APIs, and CORS misconfiguration. This is where business-logic testing pays the highest return.
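The tenancy-boundary IDOR pattern described above has a simple server-side fix: scope every object lookup by the tenant taken from the authenticated session, never by the client-supplied ID alone. The sketch below is illustrative only (the `AuthContext`, `INVOICES`, and `fetch_invoice` names are ours, not from any framework):

```python
# Hypothetical sketch: scoping object lookups by the authenticated tenant.
from dataclasses import dataclass

@dataclass
class AuthContext:
    user_id: str
    tenant_id: str  # derived server-side from the session, never from the request

# In-memory stand-in for a database keyed by (tenant_id, invoice_id)
INVOICES = {
    ("tenant-a", "inv-1"): {"amount": 100},
    ("tenant-b", "inv-2"): {"amount": 250},
}

def fetch_invoice(ctx: AuthContext, invoice_id: str) -> dict:
    # Vulnerable pattern: looking up by invoice_id alone lets any
    # authenticated user read any tenant's invoice (classic IDOR).
    # Fixed pattern: the tenant from the server-side auth context is part
    # of the key, so a forged invoice_id from another tenant finds nothing.
    record = INVOICES.get((ctx.tenant_id, invoice_id))
    if record is None:
        # Same error for "missing" and "not yours" to avoid ID enumeration
        raise PermissionError("not found")
    return record
```

The same shape applies to ORM queries: the tenant column belongs in the `WHERE` clause of every query, ideally enforced by a shared repository layer rather than per-endpoint.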
A02 · Cryptographic Failures
We test TLS configuration, cipher selection, certificate validation, hashing algorithm choice (bcrypt / Argon2 / scrypt for passwords; SHA-256 minimum for non-password use; never MD5 or SHA-1), encryption at rest, key management, and JWT signing implementation (the alg:none vulnerability is still found in roughly 5% of Bangalore engagements; weak HMAC keys in roughly 15%).
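The alg:none defence is to pin the accepted algorithm before any signature check ever runs. The stdlib sketch below shows only that principle; real code should use a maintained library (e.g. PyJWT with an explicit `algorithms=["HS256"]` list) rather than hand-rolled parsing:

```python
# Minimal stdlib sketch: reject a JWT whose header algorithm is not on an
# explicit allowlist, before any signature verification. Illustrative only.
import base64
import json

ALLOWED_ALGS = {"HS256"}  # explicit allowlist; "none" is never acceptable

def jwt_header(token: str) -> dict:
    header_b64 = token.split(".")[0]
    # JWTs use unpadded base64url; restore padding before decoding
    header_b64 += "=" * (-len(header_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(header_b64))

def check_alg(token: str) -> None:
    alg = jwt_header(token).get("alg")
    if alg not in ALLOWED_ALGS:
        raise ValueError(f"rejected alg: {alg!r}")
```

The weak-HMAC-key variant mentioned above has the same root cause in reverse: the algorithm is fine but the secret is guessable, so the signing key needs the same entropy discipline as any other credential.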
A03 · Injection
SQL injection, NoSQL injection, OS command injection, LDAP injection, server-side template injection (SSTI), and XML external entity (XXE) injection. Modern frameworks have largely eliminated trivial SQLi but second-order injection — where data passes through a sanitiser, gets stored, and is re-used in a context where the sanitisation does not apply — is still common.
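Parameterisation remains the primary SQLi defence, and second-order injection is exactly what happens when a stored value is later concatenated into SQL instead of being bound. A minimal sqlite3 sketch of the safe pattern:

```python
# Parameterised query sketch: the driver binds the value, so quote
# characters in attacker input are treated as data, not SQL syntax.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

attacker_input = "alice' OR '1'='1"

# Safe: placeholder binding; the classic tautology payload matches nothing
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
# rows is [] — contrast with f-string concatenation, which would match everything
```

The second-order trap is re-reading `attacker_input` from the database later and formatting it into a new query string; the binding discipline has to hold at every query site, not just the first.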
A04 · Insecure Design
Architectural-level issues: lack of rate limiting, missing authorisation in microservice-to-microservice calls, optimistic-concurrency race conditions, time-of-check / time-of-use vulnerabilities, missing audit logging, business-logic flaws (this category overlaps heavily with our dedicated business-logic test plan).
A05 · Security Misconfiguration
Default credentials, verbose error messages, exposed admin panels, exposed cloud storage, insecure HTTP headers, debug mode enabled in production, exposed git directories, exposed backup files, exposed .env files. This category catches an embarrassing number of issues in every engagement.
A06 · Vulnerable and Outdated Components
Dependency-chain analysis. We scan your package.json / requirements.txt / pom.xml / composer.json / Gemfile against the Snyk and OSV databases, identify direct and transitive vulnerabilities, and flag outdated runtime versions (Node, Python, PHP, Java, Ruby).
A07 · Identification and Authentication Failures
Password policy enforcement, account-lockout behaviour, credential-stuffing resilience, session fixation, session-token entropy, multi-factor implementation, password-reset workflow, single-sign-on integration security.
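Account-lockout behaviour, one of the checks listed above, can be sketched as a sliding window over recent failures. This is an illustrative in-memory version (all names are ours); a production system needs a shared store such as Redis, plus per-IP and per-device limits:

```python
# Illustrative sliding-window account-lockout sketch. Assumed parameters:
# 5 failures within 15 minutes locks the account.
import time
from collections import defaultdict

MAX_ATTEMPTS = 5
WINDOW_SECONDS = 900  # 15 minutes

_failures = defaultdict(list)  # username -> list of failure timestamps

def record_failure(username, now=None):
    _failures[username].append(now if now is not None else time.time())

def is_locked(username, now=None):
    now = now if now is not None else time.time()
    # Keep only failures inside the window, then compare to the cap
    recent = [t for t in _failures[username] if now - t < WINDOW_SECONDS]
    _failures[username] = recent
    return len(recent) >= MAX_ATTEMPTS
```

Credential-stuffing resilience is the stricter version of the same idea: the limit has to apply across usernames per source, not just per account, or an attacker simply spreads attempts across the user base.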
A08 · Software and Data Integrity Failures
Insecure deserialisation, untrusted CI/CD pipeline configuration, missing integrity verification on auto-update flows, mutable container tags in production.
A09 · Security Logging and Monitoring Failures
Missing audit logging on security-relevant events, log injection vulnerabilities, log-tampering protection, log-retention policy validation. Often a finding rather than a directly exploitable issue, but graded because it affects post-incident forensics.
A10 · Server-Side Request Forgery (SSRF)
URL-fetch features, webhook validators, image proxies, RSS importers, OAuth-callback validation. SSRF is one of the highest-impact findings in cloud-native apps because it provides a path to AWS / Azure / GCP metadata services and from there to credentials.
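A first-line SSRF guard rejects URLs that point at internal or link-local address space before the fetch ever happens. The sketch below handles IP literals only; a real implementation must also resolve hostnames and re-validate the resolved address at connect time (DNS rebinding), and ideally routes all outbound fetches through an egress proxy with an allowlist:

```python
# Sketch of an SSRF allow-check for IP-literal URLs. Illustrative only:
# hostname resolution and rebinding defences are deliberately out of scope.
import ipaddress
from urllib.parse import urlparse

def is_url_allowed(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False  # block file://, gopher://, etc.
    host = parsed.hostname or ""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # Hostname, not an IP literal: resolve and re-check in real code
        return True
    # Blocks 10/8, 172.16/12, 192.168/16, loopback, and 169.254/16 —
    # the cloud metadata endpoint (169.254.169.254) is link-local
    return not (ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved)
```

Note that the metadata-service path to credentials is why cloud providers now offer hardened endpoints (e.g. IMDSv2 on AWS, which requires a session token); enabling those is a complementary mitigation, not a replacement for the application-side check.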
OWASP ASVS L1, L2, L3
The OWASP Application Security Verification Standard (ASVS) is a more comprehensive control catalogue than the Top 10 — currently in version 4.0.3 with v5.0 in late draft. ASVS organises 286 controls across 14 chapters and three verification levels.
Level 1 — Opportunistic
The minimum bar for any internet-facing application. Defends against opportunistic attackers using widely-available tools. Our default scope when the engagement is buyer-driven and no specific level is mandated.
Level 2 — Standard
Applications handling sensitive data — most B2B SaaS, all fintech, all healthtech, any consumer app at scale. Our most-common engagement level. ~250 controls verified.
Level 3 — Advanced
Applications protecting high-value or critical-impact data — banking core systems, healthcare clinical decision support, critical infrastructure, life-safety. Engagement is roughly 50% larger than L2 and pricing scales accordingly.
Business-logic testing — the actual differentiator
Business-logic testing is what separates a useful engagement from a checklist exercise. The methodology requires reading your application’s sequence diagrams, running it through its expected and unexpected flows, and probing the moments where business rules cross technical boundaries.
Recurring patterns in Bangalore B2B SaaS:
- Free-tier-to-paid feature bypass — the front-end hides paid features but the back-end does not enforce the entitlement check on the API endpoint
- Workflow-state-machine skipping — order goes from "draft" to "fulfilled" without passing through "approved"
- Race conditions on monetary operations — discount-code application, refund processing, balance updates
- Tenancy-boundary IDOR in B2B SaaS — user from tenant A enumerates resources of tenant B
- Time-of-check / time-of-use on critical operations — file is verified, then re-read by the worker that processes it
- Optimistic-concurrency exploitation — submitting two requests in parallel to bypass per-resource constraints
- Privilege amplification via integration — Slack-bot OAuth scope grants more than the in-app role would
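The race-condition patterns above (discount codes, refunds, optimistic-concurrency bypass) share one root cause: a non-atomic check-then-act step. A minimal sketch of the single-use-coupon race and one fix — in a real system the atomic guard lives at the database (e.g. a conditional `UPDATE ... WHERE used = 0` checked by affected-row count), not a process-local lock:

```python
# Sketch: making coupon redemption atomic so parallel requests cannot
# both pass the "unused?" check. CouponStore is an illustrative stand-in.
import threading

class CouponStore:
    def __init__(self):
        self._used = set()
        self._lock = threading.Lock()

    def redeem(self, code: str) -> bool:
        # Vulnerable version: checking `code not in self._used` and then
        # adding it WITHOUT the lock lets two parallel requests both succeed.
        with self._lock:
            if code in self._used:
                return False
            self._used.add(code)
            return True

store = CouponStore()
results = []
threads = [threading.Thread(target=lambda: results.append(store.redeem("SAVE50")))
           for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Exactly one redemption succeeds regardless of interleaving
```

In testing we probe exactly this seam by firing parallel requests at the endpoint; the fix is always to push the uniqueness guarantee down to a layer that can enforce it atomically.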
GraphQL-specific testing
GraphQL has its own classes of issues that REST does not exhibit. Our GraphQL test plan covers:
- Introspection exposure — schema visible to unauthenticated users in production
- Depth-based denial of service — deeply-nested queries that explode resolver execution
- Aliasing-based denial of service — single requests with hundreds of aliased fields
- Batched-query rate-limit bypass — multiple operations in one HTTP request
- Field-suggestion-based schema discovery — error messages suggesting valid field names
- Authorisation gap at resolver level — endpoint-level auth checks are insufficient
- Mutation-side-effect exploitation — mutations triggering downstream effects without authorisation
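The depth-based denial-of-service item above is typically mitigated with a query-depth limit evaluated before execution. The sketch below is a deliberately rough pre-parse guard: counting braces over-approximates selection-set depth and ignores strings and comments, so a production guard should walk the parsed AST (e.g. with graphql-core) instead:

```python
# Rough GraphQL query-depth guard. MAX_DEPTH = 8 is an assumed budget;
# tune it to the deepest legitimate query your clients actually send.
MAX_DEPTH = 8

def approx_query_depth(query: str) -> int:
    depth = max_depth = 0
    for ch in query:
        if ch == "{":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == "}":
            depth -= 1
    return max_depth

def reject_if_too_deep(query: str) -> None:
    if approx_query_depth(query) > MAX_DEPTH:
        raise ValueError("query depth limit exceeded")
```

Depth limits pair with cost analysis (aliased-field counting addresses the aliasing DoS item) because a shallow query with hundreds of aliases can be as expensive as a deep one.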
OWASP API Top 10 (2023)
For pure API engagements (where the front-end is not in scope, or the API is a separate product), we test against the OWASP API Top 10 (2023 edition):
- API1 · Broken Object Level Authorization (BOLA) — the API equivalent of IDOR, the single most-common high-impact API finding
- API2 · Broken Authentication
- API3 · Broken Object Property Level Authorization (BOPLA) — mass-assignment combined with property-level filtering gaps
- API4 · Unrestricted Resource Consumption — rate limiting, payload size, expensive operations
- API5 · Broken Function Level Authorization
- API6 · Unrestricted Access to Sensitive Business Flows
- API7 · Server Side Request Forgery
- API8 · Security Misconfiguration
- API9 · Improper Inventory Management — abandoned endpoints, undocumented versions, deprecated paths still serving
- API10 · Unsafe Consumption of APIs
Engagement methodology
Five phases, sequential per asset, parallelised across the engagement. Detailed in the VAPT methodology section; the application-specific specialisation is heavier on the threat-modelling and business-logic phases. Output is the same — a written report with reproduction, evidence, and remediation guidance per finding.
Pricing in INR
Tier 1:
- One web application + its API
- OWASP ASVS L1–L2
- 2–3 week engagement
- Two retest cycles
Tier 2:
- Web app + API + GraphQL + business-logic
- OWASP ASVS L2 full coverage
- 4-week engagement
- White-box source-code review optional (+₹80,000)
- Three retest cycles
Tier 3:
- OWASP ASVS L3 (286 controls)
- White-box code review included
- Architecture review session
- 6-week engagement
- Quarterly retainer option
Twenty common findings
- IDOR across tenancy boundaries (multi-tenant SaaS)
- Mass-assignment in user-update APIs (role escalation)
- JWT alg:none / weak-HMAC / missing-expiry validation
- Missing rate limit on authentication / password-reset / OTP endpoints
- Server-Side Request Forgery in URL-fetch / webhook / image-proxy
- Stored XSS in admin panels (often skipped in production test scopes)
- Reflected XSS in error pages, search, redirect parameters
- SQL injection in poorly-parameterised search / filter / sort
- NoSQL injection in MongoDB-backed search
- Privilege escalation via missing authorisation in microservice-to-microservice calls
- OAuth scope over-grant (Google / Slack / GitHub integrations)
- Open redirect in login / sign-up callback parameters
- Race condition in discount / refund / balance operations
- Workflow-state-machine bypass
- Hardcoded API keys in front-end JavaScript bundle
- Verbose error messages exposing stack traces / DB queries
- Insecure file upload (content-type bypass, path traversal)
- Missing CSRF protection on state-changing operations
- Subdomain takeover via dangling DNS
- Vulnerable transitive dependencies in production builds
Web application security by Bangalore industry vertical
Industry context shapes the test plan. Below is the application of our methodology to the verticals we deliver into most often.
BFSI — Banks, NBFCs, payment aggregators
BFSI web applications carry the highest threat model — well-resourced adversaries, regulatory expectation of comprehensive testing, and direct financial-transaction surface. Specific test areas: payment-flow business-logic (double-spend, replay, concurrent-transaction races), maker-checker bypass, dormant-account hijacking, fund-transfer authorisation flow, customer-impersonation paths via support-tool integration. Our BFSI engagements run quarterly per RBI expectations and produce reports designed for RBI examination submission.
Fintech and capital markets
Trading platforms, lending platforms, wealth-management platforms. Specific test areas: order-book manipulation surfaces, KYC-bypass paths, lending-eligibility circumvention, portfolio-data leakage between users, customer-data exposure via API misconfiguration. For SEBI-regulated entities, see our SEBI CSCRF page.
HealthTech — Telemedicine, diagnostics, EHR
Web applications handling PHI face DPDP sensitive-data exposure and clinical-governance expectations. Specific test areas: patient-record IDOR (the highest-impact finding class for telemedicine), prescription-tampering paths, doctor-patient-conversation-recording exposure, lab-result IDOR, role-based access between treating physician / consulting physician / billing-staff. Our HealthTech engagements add specific PHI-exposure test cases beyond standard scope.
SaaS — B2B exporters and consumer products
The largest single category. SaaS web app testing focuses on multi-tenant security: tenant-boundary IDOR, cross-tenant authorisation, tenant-context-switching attacks, customer-data isolation under shared-database architectures. Add to that the SOC 2 / ISO 27001 evidence-collection requirement and the buyer-side vendor-security review that the report feeds.
EdTech — Children’s and adult platforms
EdTech web applications serving children carry DPDP children’s-data obligations. Specific test areas: age-verification implementation, parental-consent flow integrity, prohibition on tracking and behavioural monitoring, advertising-SDK presence (which is generally prohibited in children’s contexts under DPDP), and data-minimisation enforcement.
Government and public-sector technology
Government web applications and citizen-services platforms. Our engagements here typically run under specific tender requirements and produce CERT-In compliant reports formatted for regulator submission. Specific test areas: identity-federation security with UIDAI / Digi-Locker / CSC, citizen-data IDOR, bulk-data-export prevention, document-tampering protection.
White-box vs black-box testing — when each is right
Most web application engagements default to black-box testing — we test from the perspective of an external attacker without source-code access. White-box adds source-code access; we read the code while we test, which substantially increases the finding rate and reduces the engagement timeline by 20–30%.
White-box advantages:
- Hidden code paths surfaced (debug endpoints, admin functions not in normal UI flow)
- Hardcoded secrets and credentials surfaced quickly
- Dependency-chain analysis substantially deeper
- Business-logic flaws traced through the implementation rather than inferred from behaviour
- Race conditions identified by code-path analysis rather than fuzzing
White-box disadvantages:
- Cost is higher (a ~30–50% premium)
- Less representative of real-attacker discovery process
- Some clients are unwilling to share source code with external auditors (we operate under NDA with strict access controls; most concerns are addressed by the access architecture)
Our recommendation: white-box for greenfield applications (where finding everything matters), white-box for high-risk verticals (BFSI, HealthTech), white-box for SOC 2 / ISO 27001 evidence cycles where comprehensiveness is the goal. Black-box for periodic external-perspective testing on established applications, for buyer-driven third-party-audit-acceptable testing, and for first-time engagements where the client is uncertain about source-code sharing.
API security deep-dive
APIs are now the dominant attack surface for most B2B SaaS applications. Where the front-end UI used to be the primary attack surface, the modern architecture exposes the API directly via SDKs, mobile applications, partner integrations, and public-facing developer portals. The API is now where the value lives and the controls have to live too.
The OWASP API Top 10 (2023 edition) is the canonical baseline; our methodology layers on the surface-specific specialisation that the API may need.
REST APIs
The dominant API style. Test plan covers OWASP API Top 10 plus surface-specific items: HTTP-verb tampering, content-type confusion, conditional-request abuse (If-Match / If-None-Match exploits), bulk-endpoint abuse, idempotency-key abuse.
GraphQL APIs
Detailed treatment earlier on this page. The depth-of-query, aliasing, batched-query, and field-suggestion attack surfaces are all GraphQL-specific.
gRPC APIs
Less common in Bangalore SaaS but used in service-mesh deployments. Test plan includes Protocol Buffer reflection, streaming-message abuse, gRPC-Web bridge security, deadline-propagation analysis.
SOAP / XML APIs
Legacy but still common in BFSI integrations. XXE, XML-bomb, WS-Security misconfiguration, SOAP-action manipulation are the main surfaces.
Webhook receivers
Increasingly relevant. Webhook-signature verification, replay-attack prevention, payload-validation, error-handling without information leakage.
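Webhook-signature verification is the anchor control for receivers. A minimal sketch modelled on the common "hex SHA-256 HMAC over the raw body" scheme — header names and the exact signing scheme vary per provider, so check the sender's documentation, and the `SECRET` value here is purely illustrative:

```python
# HMAC webhook-signature verification sketch. Verify against the RAW
# request bytes, before any JSON parsing or re-serialisation.
import hashlib
import hmac

SECRET = b"whsec_example"  # illustrative shared secret issued by the sender

def verify_webhook(raw_body: bytes, signature_header: str) -> bool:
    expected = hmac.new(SECRET, raw_body, hashlib.sha256).hexdigest()
    # compare_digest prevents timing side-channels on the comparison
    return hmac.compare_digest(expected, signature_header)
```

Replay prevention is the usual second layer: providers that include a timestamp in the signed payload expect the receiver to reject signatures older than a short window.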
Our default API engagement runs against the production API style; multi-API estates (REST + GraphQL + webhooks, common in modern SaaS) are scoped together and tested as an integrated whole.
How a typical web app engagement runs week-by-week
Week 0 is the scoping conversation and the SOW. Week 1 is reconnaissance and threat modelling, and ends with a kickoff call where we present the threat model back to your team and confirm the testing-plan priorities. Weeks 2 and 3 are active testing — daily standups with your security team, immediate escalation of any P0 findings, no waiting for the report to surface critical issues. Week 4 is report drafting and peer review by a second senior engineer; the partner overseeing the engagement signs off on the deliverable. Weeks 5 and 6 are retest cycles after your team remediates; the final retest produces a closure report stamping each finding as closed, residual, or open. Most clients also receive a one-hour debrief call with the lead auditor where the team can ask questions about specific findings and remediation approaches; this is included in the engagement fee.
Evaluating a web app security vendor — six questions
The Bangalore web application security vendor population is uneven; the questions below separate substantive vendors during procurement.
1. Manual-effort percentage: what fraction of the engagement is senior-engineer manual analysis vs automated scanner output? Below 60% manual indicates a tool-output rebrand. We are 70%+ manual on every engagement.
2. Senior-engineer ratio: how many engineers with 5+ years of application security work on the engagement, and what fraction of total engagement effort is theirs? Vendors staffing junior engineers behind a senior name are common; specific staffing transparency is the antidote.
3. Business-logic methodology: ask for the threat-modelling approach used per engagement. Vendors that conflate business-logic testing with vulnerability scanning are missing the highest-impact finding class.
4. White-box willingness: ask whether source-code review is offered, and at what marginal cost. Vendors declining white-box are typically less confident with codebase analysis.
5. Retest cycles: how many retest cycles are within the SOW? Retests-as-separate-engagement is the soft signal of a vendor expecting incomplete remediation.
6. Findings filing: will the vendor file findings directly into your Jira / Linear / GitHub Issues, with severity, CVSS, and suggested owner? Vendors that decline this are typically delivering report-only engagements that leave your team to file findings rather than pre-filing them.
We answer all six specifically and in writing during scoping.
To start a web application security engagement, the next step is a thirty-minute scoping call. Most engagements begin within five business days.