Penetration Testing · Cloud Security

Cloud Security Assessment for AWS, Azure & GCP — Bangalore

Cloud security assessment for AWS, Azure, GCP, OCI and DigitalOcean. CIS benchmarks, NIST 800-53 mappings, IAM graph analysis, privilege-escalation paths, ISO 27017 mapping for SOC 2 / ISO 27001 readiness. Read-only by default. Delivered from Bengaluru.

Timeline
3–5 weeks
From (INR)
₹3,50,000
Delivered from
Bengaluru
Empanelment
CERT-In
cloud security assessment Bangalore · AWS security audit India · Azure security review Bengaluru · GCP security posture · CIS AWS benchmark India · cloud IAM privilege escalation audit

Cloud security in 2026 is the place where most Bangalore B2B SaaS companies discover that their security posture and their compliance posture are different things. The application is well-tested, the team has SOC 2 readiness underway, the SOC report is signed — and the AWS organisation has 47 IAM users with AdministratorAccess, an S3 bucket with public read, and a Lambda function whose execution role can pass any role in the account. Misconfigurations get found by automated scanners. Privilege-escalation paths and lateral-movement chains require an auditor who has watched them succeed in incident-response engagements. This page describes our cloud assessment methodology, the frameworks it maps to, and the engagement model for Bangalore companies running multi-account AWS / Azure / GCP estates.

Why cloud security needs its own discipline

Three properties make cloud security a different discipline from infrastructure or application security. First, the attack surface is API-defined — every action that can be taken against your cloud environment is an API call against the provider’s control plane, and authorisation to make those calls is governed by IAM. The control plane is the new perimeter. Second, the blast radius of a misconfiguration is non-local — a single over-permissive IAM role or a single public S3 bucket can expose data across the entire organisation, irrespective of network topology. Third, the velocity of change is high — Bangalore SaaS engineering teams typically deploy cloud-infrastructure changes (Terraform, CloudFormation, Bicep, gcloud commands) several times per week, and security posture drifts continuously between assessments.

The implication is that cloud security cannot be assessed once a year and considered solved. Our methodology produces a point-in-time assessment plus a recommended set of automated checks for continuous monitoring; most Bangalore engagements convert to a retainer after the first engagement.

The shared responsibility model — what you own

Every cloud provider publishes a shared-responsibility document that delineates what they secure (security of the cloud) versus what you secure (security in the cloud). The boundary moves with the service: for IaaS (EC2, Azure VM, GCE), you own the OS, application, identity, and data. For PaaS (RDS, Azure SQL, Cloud SQL), the provider owns the OS and infrastructure; you own identity, network configuration, and data. For fully managed services (S3, Cosmos DB, Cloud Storage), the provider owns most of the stack; you own identity, configuration, and data.

The recurring failure mode in Bangalore engagements is teams assuming the provider has covered something they have not. Examples: assuming AWS encrypts everything by default (it does not: S3 applies SSE-S3 to new objects by default since 2023, but KMS-managed encryption remains opt-in and EBS encryption-by-default is a per-region setting that is off unless enabled); assuming Azure’s default network security group denies all traffic (it does not: the default rules allow most intra-VNet traffic); assuming GCP’s default service-account permissions are minimal (the default Compute Engine service account holds the Editor role on the project, which is close to admin). Our assessment maps the responsibility boundary explicitly and identifies where you have left controls undeployed.
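
Two of those AWS defaults can be verified with a few read-only API calls; a hedged boto3 sketch (the profile name and region are assumptions for illustration):

```python
import boto3
from botocore.exceptions import ClientError

session = boto3.Session(profile_name="audit")  # assumed read-only profile

# EBS encryption-by-default is a per-region setting, off unless enabled.
ec2 = session.client("ec2", region_name="ap-south-1")
print("EBS encryption by default:",
      ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"])

# Account-level S3 Block Public Access must be configured explicitly.
account_id = session.client("sts").get_caller_identity()["Account"]
s3control = session.client("s3control", region_name="ap-south-1")
try:
    cfg = s3control.get_public_access_block(AccountId=account_id)
    print("S3 account public access block:", cfg["PublicAccessBlockConfiguration"])
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
        print("No account-level S3 Block Public Access configured")
    else:
        raise
```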

AWS-specific methodology

AWS engagements use a combination of read-only API enumeration via tools (Prowler, ScoutSuite, CloudSploit, our internal tooling) and manual analysis; a minimal read-only sketch follows the list. Specific test areas:

  • IAM — users, groups, roles, identity-based policies, resource-based policies, permissions boundaries, SCPs (organisation-level), service-control inheritance, federation configuration (SAML / OIDC), MFA enforcement, root-account usage, access-key rotation, password policy
  • Compute — EC2 security groups, instance profiles, EBS encryption, AMI sharing, Lambda execution roles, Lambda environment variables (secrets exposure), ECS task roles, Fargate security
  • Storage — S3 bucket policies, S3 public access block, S3 default encryption, S3 lifecycle policies, S3 access logging, EBS encryption, FSx, EFS access points
  • Database — RDS encryption, RDS public accessibility, RDS backup encryption, DynamoDB encryption, Aurora cluster security, ElastiCache security, Redshift cluster security
  • Network — VPC flow logs, security group rules, NACLs, Direct Connect / VPN configuration, Transit Gateway, Route 53 DNS security
  • Logging and monitoring — CloudTrail multi-region, CloudTrail integrity, GuardDuty enablement, Config rules, Security Hub findings, AWS WAF rules
  • Encryption and secrets — KMS key rotation, KMS policies, Secrets Manager rotation, Parameter Store secure strings, certificate management via ACM
  • Organisation-level — SCPs, AWS Organizations structure, Control Tower configuration, account-level audit-trail aggregation
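
The read-only sketch referenced above, for one IAM check from the list: users with the AWS-managed AdministratorAccess policy directly attached, cross-checked for MFA. This covers direct attachment only; group- and role-inherited admin access needs a fuller policy evaluation.

```python
import boto3

# Hedged sketch: flag IAM users with AdministratorAccess directly attached.
iam = boto3.client("iam")
ADMIN_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        attached = iam.list_attached_user_policies(UserName=name)["AttachedPolicies"]
        if any(p["PolicyArn"] == ADMIN_ARN for p in attached):
            mfa = iam.list_mfa_devices(UserName=name)["MFADevices"]
            print(f"{name}: AdministratorAccess attached ({'MFA' if mfa else 'no MFA'})")
```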

Azure-specific methodology

Azure engagements cover the same conceptual areas with Azure-specific tooling (Azure Policy, Defender for Cloud, ScubaGear, Azucar, Stormspotter); a minimal read-only sketch follows the list:

  • Entra ID (formerly Azure AD) — conditional access policies, MFA enforcement, application registrations, service principals, managed identities, privileged access management, eligible vs active role assignments, B2B / B2C tenants
  • Subscription and management group structure — RBAC, custom roles, subscription-level policy, management-group inheritance
  • Compute — VM security, disk encryption, VM extensions, App Service security, Function App security, AKS cluster security
  • Storage — Storage Account access keys, SAS tokens, Blob public access, encryption at rest, network rules
  • Database — Azure SQL TDE, Cosmos DB security, PostgreSQL / MySQL flexible server security
  • Network — NSGs, Azure Firewall, Private Endpoints, Private Link, ExpressRoute
  • Key Vault — access policies, RBAC mode, soft-delete and purge protection, certificate auto-rotation
  • Monitoring — Activity Log, Defender for Cloud secure score, Log Analytics retention, Sentinel coverage
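
The read-only sketch referenced above, for one storage check from the list, using the azure-mgmt-storage SDK (the subscription ID is a placeholder):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Hedged sketch: list storage accounts where blob public access is not
# explicitly disabled. None means the flag was never set, which on older
# accounts can still permit public containers.
credential = DefaultAzureCredential()
subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
storage = StorageManagementClient(credential, subscription_id)

for account in storage.storage_accounts.list():
    if account.allow_blob_public_access is not False:
        print(f"{account.name}: allow_blob_public_access={account.allow_blob_public_access}")
```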

GCP-specific methodology

GCP engagements use ScoutSuite, GCPBucketBrute, the Forseti Security framework, and GCP’s own Recommender API; a minimal read-only sketch follows the list:

  • Identity and Access Management — IAM roles, service accounts (the most-common GCP attack vector), default service-account usage, custom roles, organisation policy, VPC Service Controls
  • Cloud Storage — bucket-level IAM vs ACL, public access prevention, uniform bucket-level access, encryption
  • Compute — GCE instance security, default network configurations, OS Login enforcement, GKE cluster security, Cloud Run service security, Cloud Functions security
  • Database — Cloud SQL security, Spanner, Firestore rules, BigQuery dataset access
  • Network — VPC firewall rules, Cloud NAT, Cloud Armor, private connectivity
  • Logging and monitoring — Cloud Audit Logs (Admin Activity, Data Access, System Event), Log sinks, Security Command Center coverage
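
The read-only sketch referenced above, for the default service-account check, using the Cloud Resource Manager v1 API (PROJECT_ID is a placeholder; read-only Application Default Credentials are assumed):

```python
from googleapiclient import discovery

# Hedged sketch: flag the default Compute Engine service account if it still
# holds the project-level Editor role.
project_id = "PROJECT_ID"
crm = discovery.build("cloudresourcemanager", "v1")
policy = crm.projects().getIamPolicy(resource=project_id, body={}).execute()

for binding in policy.get("bindings", []):
    if binding["role"] != "roles/editor":
        continue
    for member in binding.get("members", []):
        if member.endswith("-compute@developer.gserviceaccount.com"):
            print(f"Default compute service account holds Editor: {member}")
```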

Frameworks: CIS, NIST, ISO 27017, CSA STAR

Findings are mapped to the frameworks your auditors and buyers reference:

  • CIS Foundations Benchmarks — primary framework, control-by-control coverage
  • NIST SP 800-53 r5 — federal baseline, mapped per finding
  • NIST CSF 2.0 — high-level functional summary for executive consumption
  • ISO 27017 — cloud-specific extension to ISO 27001, control-by-control mapping
  • ISO 27018 — cloud privacy extension, relevant for DPDP / GDPR alignment
  • CSA Cloud Controls Matrix (CCM) v4.0 — STAR-aligned
  • SOC 2 Common Criteria — for SOC 2 readiness evidence
  • AWS / Azure / GCP Well-Architected Framework Security Pillar

IAM graph analysis — privilege escalation paths

The most-impactful findings in a cloud assessment are rarely individual misconfigurations — they are chains. An engineer has IAM permissions A and B; A is innocuous, B is innocuous, but A combined with B allows privilege escalation to administrator. Graph analysis is what surfaces these chains.

For AWS, we use PMapper to construct a directed graph of IAM principals and the actions they can take, then query the graph for paths between low-privilege starting points and high-privilege endpoints. Specific dangerous combinations we surface include: iam:PassRole + lambda:CreateFunction + lambda:InvokeFunction (run code under any role the principal can pass to Lambda); iam:CreateAccessKey for any user (mint an access key for, and thereby act as, any user); sts:AssumeRole into administrator roles whose trust policy is over-broad; cloudformation:CreateStack with a powerful service role; and many others.
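
PMapper does this at graph scale; the hedged sketch below shows the underlying single-principal check via the IAM policy simulator (the principal ARN is a placeholder, and a real run would also pass resource ARNs):

```python
import boto3

# Hedged sketch: does one principal hold the full PassRole + Lambda chain?
# Without resource ARNs the simulation evaluates against "*", which is the
# pessimistic (worst-case) reading appropriate for an audit.
iam = boto3.client("iam")
principal_arn = "arn:aws:iam::111122223333:user/example-engineer"  # placeholder
chain = ["iam:PassRole", "lambda:CreateFunction", "lambda:InvokeFunction"]

results = iam.simulate_principal_policy(
    PolicySourceArn=principal_arn, ActionNames=chain
)["EvaluationResults"]

if all(r["EvalDecision"] == "allowed" for r in results):
    print(f"{principal_arn} holds the full escalation chain: {', '.join(chain)}")
```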

For Azure, BloodHound (with the AzureHound data collector) graphs Entra ID and subscription RBAC. Specific chains we look for include: Application Administrator or Cloud Application Administrator combined with ownership of a privileged service principal (add a credential to the application, then sign in as it); and Privileged Authentication Administrator, which can reset authentication methods for more-privileged users, combined with the right conditional-access exceptions.

For GCP, we use a combination of IAM Recommender and custom queries to identify service-account paths. The default Compute Engine service account (which has Editor role on the project unless explicitly disabled) is the most-common attack starting point.

Kubernetes and container security

Kubernetes adds its own attack surface on top of the cloud. Our engagement covers managed clusters (EKS, AKS, GKE) and self-hosted clusters at the same depth.

Specific test areas: cluster RBAC graph analysis, namespace isolation, network policies, Pod Security Standards (or PSPs in legacy clusters), workload security contexts, container image vulnerabilities (we scan with Trivy and Grype), runtime security (eBPF-based monitoring where deployed), service-mesh configuration (Istio, Linkerd), ingress-controller security, secrets management (native vs External Secrets Operator vs Sealed Secrets vs Vault), and build-pipeline security (image signing, admission controllers).

For self-hosted clusters, we run kube-bench against the CIS Kubernetes benchmark and kube-hunter for known vulnerability scanning. Findings are reported in the same document as cloud-native findings.
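
One concrete check from the areas above, as a hedged sketch using the official Kubernetes Python client (the pod- versus container-level securityContext merge semantics are simplified here):

```python
from kubernetes import client, config

# Hedged sketch: flag containers that may run as root. Pod- and container-
# level securityContext fields actually merge per field; "container overrides
# pod wholesale" is a simplification for illustration.
config.load_kube_config()  # assumes a read-only kubeconfig context
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    pod_sc = pod.spec.security_context
    for container in pod.spec.containers:
        sc = container.security_context or pod_sc
        non_root = bool(sc and sc.run_as_non_root)
        run_as_user = sc.run_as_user if sc else None
        if not non_root and run_as_user in (None, 0):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}/{container.name}: may run as root")
```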

Pricing in INR

Tier 1 · Single cloud
AWS or Azure or GCP
₹3,50,000 + GST
  • One cloud provider, up to 5 accounts
  • CIS + NIST mapping
  • 3-week engagement
  • One retest cycle

Tier 3 · Continuous
Quarterly Retainer
₹2,80,000 / quarter + GST
  • Daily automated drift detection
  • Quarterly manual review
  • Slack-channel access
  • Priority IR triage if cloud-related incident

Twenty-five common findings

  1. S3 / Blob / Cloud Storage public access
  2. EBS / managed disk encryption disabled
  3. RDS / Azure SQL public accessibility
  4. IAM users with AdministratorAccess
  5. Long-lived access keys (no rotation)
  6. Missing MFA on privileged accounts
  7. Default VPC / network without segmentation
  8. Security groups with 0.0.0.0/0 on sensitive ports (22, 3389, 1433, 5432, 6379, 27017, 9200)
  9. CloudTrail / Activity Log not multi-region or not protected
  10. KMS keys without rotation
  11. Root account usage
  12. Service accounts with Editor / Owner on project
  13. iam:PassRole over-permissive combinations
  14. Lambda environment variables containing secrets in plaintext
  15. Unencrypted RDS snapshots shared publicly
  16. AMIs shared publicly
  17. SCPs not deployed at organisation root
  18. Conditional Access policies missing for legacy authentication
  19. Legacy-protocol authentication (SMTP, IMAP, POP3, MAPI) not blocked
  20. Storage Account access keys not rotated
  21. Container images with critical CVEs in production
  22. Kubernetes RBAC granting cluster-admin too broadly
  23. Pods running as root
  24. Missing pod security admission
  25. Service mesh mTLS not enforced
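
As an illustration of how finding 8 is detected during enumeration, a hedged boto3 sketch (single region for brevity; engagements iterate every enabled region):

```python
import boto3

# Hedged sketch: security-group rules open to the world on sensitive ports.
SENSITIVE_PORTS = {22, 3389, 1433, 5432, 6379, 27017, 9200}
ec2 = boto3.client("ec2", region_name="ap-south-1")

for page in ec2.get_paginator("describe_security_groups").paginate():
    for sg in page["SecurityGroups"]:
        for rule in sg["IpPermissions"]:
            if not any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
                continue
            lo, hi = rule.get("FromPort"), rule.get("ToPort")
            if lo is None:  # protocol "-1": all traffic
                print(f"{sg['GroupId']}: all traffic open to 0.0.0.0/0")
            elif any(lo <= port <= hi for port in SENSITIVE_PORTS):
                print(f"{sg['GroupId']}: ports {lo}-{hi} open to 0.0.0.0/0")
```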

Industry-specific cloud security applications in Bangalore

Cloud security is not a uniform discipline. The control set you need depends substantially on your industry, your buyer base, and the regulators you answer to. Below is the application of our methodology to the five Bangalore industry verticals we deliver into most often. Each vertical has a different threat model, a different framework-mapping priority, and a different deliverable expectation from the auditor.

BFSI — Banks, NBFCs, and payment aggregators

Cloud security for BFSI in Bangalore operates inside RBI’s outsourcing Master Direction, RBI’s 2023 cloud guidance, and the data-localisation expectation under the Payments Data Storage circular of 2018. The control set extends beyond CIS benchmarks to include: data-localisation evidence (every byte of payment data resident in India), encryption-key custody (keys held in HSMs under the entity’s control, not the cloud provider’s default KMS), audit-trail integrity (CloudTrail / Activity Log protected and replicated to a separate account / subscription / project that the engineering team cannot modify), and disaster-recovery testing with documented evidence of successful failover tests within the last 12 months. Our BFSI engagements add roughly 18 controls beyond standard scope, and the report includes a separate annexure mapped to RBI’s outsourcing requirements. Bangalore BFSI clients we deliver into routinely include digital-banking BUs of the major private-sector banks, several mid-tier NBFCs, and a meaningful portion of India’s payment-aggregator population.

Fintech — Lending, wealth, insurtech

Bangalore is the centre of mass for Indian fintech, and the regulatory overlay differs by sub-segment. Lending fintechs (RBI-regulated digital-lending NBFCs, NBFC-P2P platforms, account-aggregator-integrated platforms) have specific data-localisation, customer-due-diligence, and grievance-redressal cloud-control implications. Wealth-management fintechs operating under SEBI registration follow the SEBI CSCRF expectations; see our SEBI CSCRF page. Insurtech (IRDAI-regulated) follows IRDAI’s information and cyber security guidelines. Our fintech engagements scope the regulatory overlay first and design the cloud control set against it; the result is a deliverable that is acceptable to the relevant regulator and reduces the probability of supervisory follow-up.

HealthTech — Telemedicine, diagnostics, EHR

HealthTech in Bangalore typically operates inside the Telemedicine Practice Guidelines (March 2020), the draft Digital Information Security in Healthcare Act (DISHA) framework (where applicable), the National Digital Health Mission’s ABDM data-protection expectations, and increasingly DPDP Act obligations for sensitive health data. Cloud security here adds: PHI segregation in dedicated VPCs (AWS), VNets (Azure), or VPC networks (GCP), double-key encryption with the second key held by the entity’s clinical-data officer, audit logging at the field-access level (who read this patient record, when, from which IP), and India data residency for clinical records absent specific patient consent for cross-border transfer. We have delivered cloud assessments for major Bangalore telemedicine platforms, several diagnostic-tech players, and a number of B2B health-data infrastructure firms.

SaaS — B2B exporters and consumer products

The largest single category of Bangalore cloud-assessment engagements. The threat model is buyer-driven: the engagement’s purpose is to satisfy enterprise procurement’s vendor-security review and to feed evidence into SOC 2 / ISO 27001 audit cycles. Control set is CIS-benchmark-led with NIST 800-53 mapping for SOC 2; we add tenancy-isolation review (multi-tenant SaaS specifically) covering data-segregation across customer accounts, IAM-graph analysis with cross-tenancy escalation paths, and exfiltration-detection control coverage. The report sits in your buyer-facing security pack alongside your SOC 2 report and your VAPT report.

ITeS / BPO / KPO

Bangalore’s IT-enabled services industry has historically been ahead on traditional security but has been slower to adapt to cloud-native control sets. The buyer-side expectation here is also unusual — many ITeS engagements are governed by the customer’s security policy rather than an industry framework, which means our engagement starts with reviewing your customer agreements to identify the contractual control obligations. Cloud security for ITeS adds: customer-data segregation at the cloud-account / subscription level, customer-specific encryption-key custody for high-sensitivity accounts, and audit-trail submission to the customer’s SIEM where contracted. The deliverable is structured so it can be shared with multiple customers under NDA without revealing customer-specific operational detail.

Cloud-native incident patterns we see most often

The patterns below are the cloud-native incident archetypes our incident-response practice sees most often at Bangalore companies, and the ones our cloud-security assessment is specifically designed to prevent. We map them here because the most-effective cloud assessment is the one designed around incident archetypes that have actually been observed.

Pattern 1 — Leaked access keys via developer git commit

The most-frequent cloud incident we see. An engineer commits an AWS access key (typically via a .env file, hardcoded config, or an accidentally checked-in credentials.json) to a private repo that becomes public via a permission misconfiguration, an open-source contribution, or a third-party CI integration. The window from public exposure to first abuse has dropped to 2–4 minutes in 2026; adversarial crawling of public repos is industrialised. Prevention controls in our assessment: pre-commit secret-scanning enforcement (gitleaks, trufflehog), short-TTL credentials via SSO and IAM Identity Center, no long-lived access keys for human users, GitHub secret scanning enabled, and an organisation-wide deny on public-repo creation for engineering-owned organisations.
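
One of the prevention controls above as a hedged sketch: flag long-lived active access keys on human IAM users (the 90-day threshold is our assumption, not an AWS default):

```python
import boto3
from datetime import datetime, timedelta, timezone

# Hedged sketch: flag active IAM access keys older than the threshold.
MAX_AGE = timedelta(days=90)
iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            age = now - key["CreateDate"]
            if key["Status"] == "Active" and age > MAX_AGE:
                print(f"{user['UserName']}: key {key['AccessKeyId']} is {age.days} days old")
```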

Pattern 2 — IAM privilege chain via lambda:CreateFunction

Engineer holds individually innocuous IAM permissions, but the combination iam:PassRole + lambda:CreateFunction + lambda:InvokeFunction lets them execute code under any role they can pass to Lambda, which in over-permissive accounts includes administrator roles. We surface this in IAM graph analysis and recommend permissions boundaries to break the chain.

Pattern 3 — S3 bucket made public by a typo in CloudFormation / Terraform

Public access prevention not enabled at account level; an engineer’s IaC change inadvertently sets BlockPublicAcls = false; the bucket is public for 8 hours before drift detection catches it. Prevention: account-level S3 Block Public Access (bucket-level settings cannot override it), Config rules monitoring continuously, IaC scanning in CI before merge.
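
A hedged remediation sketch for this pattern, setting the account-level block once so bucket-level typos cannot override it:

```python
import boto3

# Hedged sketch: enable account-level S3 Block Public Access.
account_id = boto3.client("sts").get_caller_identity()["Account"]
s3control = boto3.client("s3control", region_name="ap-south-1")

s3control.put_public_access_block(
    AccountId=account_id,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```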

Pattern 4 — Lambda environment variable secret exposure via SSRF

Application has an SSRF vulnerability; attacker exploits it to retrieve Lambda environment variables containing a database password or third-party API key. Prevention: Secrets Manager / Parameter Store secure strings rather than environment variables for credentials, IMDSv2 enforcement on EC2, application-level SSRF prevention.
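
A hedged sketch of the recommended pattern: the function’s environment carries only the secret’s name, and the credential is resolved from Secrets Manager at runtime (the variable name and JSON payload shape are assumptions):

```python
import json
import os
import boto3

# Hedged sketch: an SSRF that dumps environment variables gets a secret
# *name*, not the credential itself.
secrets = boto3.client("secretsmanager")

def get_db_password() -> str:
    secret_id = os.environ["DB_SECRET_ID"]  # a name, not a credential
    value = secrets.get_secret_value(SecretId=secret_id)["SecretString"]
    return json.loads(value)["password"]
```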

Pattern 5 — Container image vulnerability exploited via known CVE

Production container running a base image that has not been updated; published CVE allows remote code execution; attacker pivots to the cluster network and onwards. Prevention: container image scanning in CI (Trivy, Grype), admission controllers enforcing image-signing and vulnerability thresholds, runtime monitoring (Falco, eBPF-based tools).
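
A hedged CI-gate sketch using Trivy’s documented severity and exit-code flags (the image reference is a placeholder):

```python
import subprocess
import sys

# Hedged sketch: fail the pipeline when Trivy reports CRITICAL findings.
# --exit-code 1 makes Trivy's exit status carry the result.
image = "registry.example.com/app:latest"
result = subprocess.run(
    ["trivy", "image", "--severity", "CRITICAL", "--exit-code", "1", image]
)
if result.returncode != 0:
    print("Blocking deploy: critical CVEs present in image")
    sys.exit(1)
```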

Continuous monitoring after the engagement

A point-in-time cloud assessment is the start of cloud security, not the end. Drift between assessments accumulates faster than most internal teams can track: the median Bangalore cloud-native organisation deploys infrastructure changes 3–8 times per week, and each change is an opportunity for posture deterioration. Continuous monitoring catches drift on the day it occurs rather than the quarter after.

The continuous-monitoring layer we deploy as part of our retainer covers: daily IAM-graph snapshot with diff against baseline (alerts on new privilege paths), daily public-resource scan (S3, Blob, Cloud Storage), daily compliance-rule check (CIS benchmark drift), continuous CloudTrail / Activity Log analytics for anomalous API calls (root usage, bulk credential creation, SCP modifications, organisation changes), monthly IAM-rotation review, and quarterly architecture review with our senior cloud-security lead.
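
A minimal sketch of the daily drift diff, assuming scan output normalised to a JSON list of records with an id field (the format is illustrative, not any specific scanner’s native output):

```python
import json

# Hedged sketch: alert only on finding IDs absent from the accepted baseline.
def new_findings(baseline_path: str, today_path: str) -> set[str]:
    with open(baseline_path) as f:
        baseline = {item["id"] for item in json.load(f)}
    with open(today_path) as f:
        today = {item["id"] for item in json.load(f)}
    return today - baseline

for finding_id in sorted(new_findings("baseline.json", "scan-today.json")):
    print(f"NEW finding since baseline: {finding_id}")
```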

Most clients integrate the monitoring output into their existing SIEM (Splunk, Sentinel, Sumo, Elastic) so the cloud signal sits alongside endpoint, application, and network signal in one operational picture. For clients without a SIEM, we provide a managed dashboard hosted on our own infrastructure (Bengaluru, ISO 27001 certified ourselves, all data resident in India).

Cloud security inside SOC 2 and ISO 27001 audits

For Bangalore companies pursuing SOC 2 or ISO 27001 (which is most of the market we serve), the cloud security assessment is also evidence for the framework audit. Our engagement is structured to produce two artifacts simultaneously: the cloud-security report (technical and operational), and an audit-evidence pack mapped to SOC 2 Common Criteria + ISO 27001 Annex A controls. The audit-evidence pack is what your CPA or certification body sees during fieldwork; the technical report is what your security team uses for remediation. Same engagement, two outputs, no double-spend.

The mapping we use: ISO 27017 (cloud-specific extension to ISO 27001) covers most of the cloud-relevant control objectives for an ISO 27001 audit. SOC 2 Common Criteria has roughly 22 controls that touch cloud configuration directly. NIST 800-53 r5 is the federal baseline used for cross-referencing. CSA Cloud Controls Matrix (CCM) v4.0 is used for STAR-aligned engagements. Our deliverable cross-references all four mappings for each finding, so the auditor can trace the control evidence regardless of the framework they are auditing against.
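
Illustratively, a single finding record in the deliverable carries all four mappings; the field names and control numbers below are placeholders, not the report’s actual schema:

```python
# Hedged illustration of the per-finding cross-reference.
finding = {
    "id": "CLD-2026-014",
    "title": "IAM user with AdministratorAccess and no MFA",
    "severity": "high",
    "mappings": {
        "cis_aws_v3": "1.15",            # placeholder control number
        "nist_800_53_r5": ["AC-6", "IA-2(1)"],
        "iso_27017": "9.2.3",
        "csa_ccm_v4": "IAM-04",
    },
}
```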

To start a cloud security assessment, the next step is a thirty-minute scoping call. We deploy a read-only IAM role / service principal in your environment on day one of the engagement; testing begins within five business days.

Frequently asked questions

Which cloud providers do you cover?
All three of the major hyperscalers, plus OCI and DigitalOcean. Most Bangalore engagements are AWS-primary (around 70%) or Azure-primary (around 20%) with GCP, OCI, or DO as secondary environments. Multi-cloud engagements are common for enterprises and we cover the full stack in a single SOW.

Which CIS benchmarks do you test against?
CIS AWS Foundations Benchmark v3.0, CIS Azure Foundations Benchmark v2.1, CIS GCP Foundations Benchmark v2.0, plus the workload-specific CIS benchmarks where applicable (CIS Kubernetes v1.9, CIS Docker v1.6, CIS PostgreSQL, CIS Linux). Each finding in our report references the specific CIS control number and a NIST 800-53 control mapping for SOC 2 / ISO 27001 evidence.

Do you analyse IAM privilege-escalation paths?
Yes, on every engagement. We use a combination of in-house tooling and open-source frameworks (PMapper for AWS, BloodHound for Entra ID, IAM Vulnerable for permission-set testing) to graph the privilege relationships in your environment and identify chains where a low-privilege identity can escalate to higher privilege. The dangerous combinations (iam:PassRole + lambda:CreateFunction, sts:AssumeRole into administrative roles, dangerous identity-based policies on shared services) are surfaced explicitly in the report.

Will testing disrupt our production environment?
No. Our methodology is read-only by default: we use API-level enumeration through your AWS / Azure / GCP control planes, not exploitation against running workloads. For exploitation work (e.g. demonstrating a privilege-escalation chain), we coordinate the test in your staging or dev account, document the proof of concept, and recommend remediation without ever running it in production. We have not caused a service-affecting incident in 200+ cloud assessments.

Do you cover Kubernetes and containers?
Yes, included whenever your environment uses managed Kubernetes (EKS, AKS, GKE) or self-hosted clusters. We test against the CIS Kubernetes benchmark, run kube-bench and kube-hunter, review the RBAC graph, examine network policies, audit Pod Security Standards, and review the container-image build pipeline. Container images are scanned for vulnerabilities and misconfigurations.

Can you assess a multi-account AWS organisation?
Multi-account is now the norm: every Bangalore SaaS company we assess in 2026 runs multiple accounts, typically 3–8 (separate dev / staging / prod / shared services / security tooling). We engage at the organisation / management-group level, enumerate all accounts in scope, and produce per-account findings plus organisation-wide findings (e.g. SCP gaps, cross-account trust risks, organisation-wide audit-trail gaps).

Can the report be used as SOC 2 / ISO 27001 audit evidence?
Yes. The report maps findings to SOC 2 Common Criteria controls and ISO 27001:2022 Annex A controls. Cloud-specific controls (ISO 27017, ISO 27018, CSA STAR Level 1) are covered as part of the methodology and surfaced in the deliverable for buyer-readiness purposes.

How often should we reassess?
Quarterly is the recommendation for high-velocity environments (engineers shipping cloud-resource changes weekly). Half-yearly is fine for stable environments. We offer a continuous-monitoring retainer (₹2,80,000 / quarter) that runs automated checks daily with a quarterly manual review on top; most Bangalore SaaS companies keep us on this retainer after their first SOC 2 cycle.

Do you perform actual exploitation?
Yes, on request and with explicit authorisation. Most cloud-provider terms of service permit penetration testing of your own resources without prior approval (AWS dropped its pre-approval requirement in 2019; Microsoft dropped Azure’s advance-notification requirement in 2017; GCP permits testing of your own projects without approval). We do exploitation in staging, document it with reproduction steps and screenshots, and recommend remediation without running exploits against production.

Do you cover serverless workloads?
Yes. Serverless adds specific attack surfaces (function event-source injection, function IAM permission scoping, environment-variable exposure of secrets, shared-runtime side channels in Lambda layers, cold-start state leakage). Our methodology covers the serverless layer specifically; it is included in the standard scope rather than as a separate engagement.
Ready to scope this engagement?

Book a thirty-minute scoping call.

Tell us your framework, your stack, and your deadline. You leave the call with a written scope, a fixed price in INR, and a kick-off invite.