Feeding AWS Security Logs into Microsoft Sentinel Across a Control Tower Landing Zone

TL;DR: Sentinel's S3 connector needs one SQS queue per log source — that constraint drives the whole design. Cross-account S3→SQS notifications silently fail without explicit queue policies, and the OIDC trust has hard-validated naming rules that Microsoft buries in a footnote.
The Setup
A client runs AWS inside a Control Tower landing zone. Their SOC runs on Microsoft Sentinel. Every security event on the AWS side needs to get to Sentinel.
The platform spans five accounts: Management, Network, Security/Audit, Shared Services, and Logging. Control Tower's Log Archive already aggregates CloudTrail and Config, but that's API activity only. The SOC also needs VPC Flow Logs, Network Firewall alert and flow logs, WAF logs, and GuardDuty findings — each coming from whichever account runs the generating service.
Sentinel's S3 data connector reads objects from S3 and uses SQS event notifications to discover new data. So the job is: get all of these sources into S3 in the Logging account, wire up SQS, and configure the OIDC trust so Sentinel can pull from it.
S3 Direct Delivery
Every source in this stack supports S3 as a native log destination. No Kinesis, no Lambda, no intermediate streaming layer. Each gets an S3 delivery resource pointing at the central Sentinel bucket in the Logging account. If you already ship these logs to CloudWatch for operational dashboards, this doesn't interfere.
WAF has a naming constraint: it only delivers to buckets whose name starts with aws-waf-logs-. It also writes at 5-minute intervals rather than continuously. The latency is fine for SIEM ingestion, but get the bucket name wrong and the logging configuration fails with nothing useful in the error.
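As a sketch of that constraint (resource and variable names here are assumptions, not from the original setup), the WAF side looks roughly like:

```hcl
# Sketch only: the bucket name must start with aws-waf-logs- or the
# logging configuration call fails. Names below are hypothetical.
resource "aws_s3_bucket" "waf_logs" {
  provider = aws.logging
  bucket   = "aws-waf-logs-sentinel-${var.logging_account_id}"
}

resource "aws_wafv2_web_acl_logging_configuration" "sentinel" {
  resource_arn            = aws_wafv2_web_acl.main.arn
  log_destination_configs = [aws_s3_bucket.waf_logs.arn]
}
```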
```hcl
# VPC Flow Logs — S3 direct delivery to Logging account
resource "aws_flow_log" "sentinel" {
  vpc_id               = aws_vpc.main.id
  traffic_type         = "ALL"
  log_destination_type = "s3"
  log_destination      = "${var.sentinel_bucket_arn}/vpc-flow-logs/"
  log_format           = "$${version} $${account-id} $${interface-id} $${srcaddr} $${dstaddr} $${srcport} $${dstport} $${protocol} $${packets} $${bytes} $${start} $${end} $${action} $${log-status}"

  destination_options {
    file_format        = "parquet"
    per_hour_partition = true
  }
}
```

The bucket policy in the Logging account needs two statements for delivery.logs.amazonaws.com: an s3:PutObject grant and an s3:GetBucketAcl check, both conditioned on aws:SourceAccount and aws:SourceArn. The PutObject statement also needs s3:x-amz-acl set to bucket-owner-full-control. Miss any of these and delivery fails silently — no error, no log entry, nothing to debug.
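A sketch of that policy as an aws_iam_policy_document, assuming the bucket resource is named aws_s3_bucket.sentinel_logs and that the allowed source accounts and flow log ARNs come from variables defined elsewhere:

```hcl
# Sketch only: variable names and the bucket resource name are assumptions.
data "aws_iam_policy_document" "log_delivery" {
  # Grant the log delivery service write access, with ownership enforced.
  statement {
    sid    = "AWSLogDeliveryWrite"
    effect = "Allow"
    principals {
      type        = "Service"
      identifiers = ["delivery.logs.amazonaws.com"]
    }
    actions   = ["s3:PutObject"]
    resources = ["${aws_s3_bucket.sentinel_logs.arn}/*"]
    condition {
      test     = "StringEquals"
      variable = "s3:x-amz-acl"
      values   = ["bucket-owner-full-control"]
    }
    condition {
      test     = "StringEquals"
      variable = "aws:SourceAccount"
      values   = var.source_account_ids
    }
    condition {
      test     = "ArnLike"
      variable = "aws:SourceArn"
      values   = var.log_source_arns
    }
  }

  # The ACL check the delivery service performs before writing.
  statement {
    sid    = "AWSLogDeliveryAclCheck"
    effect = "Allow"
    principals {
      type        = "Service"
      identifiers = ["delivery.logs.amazonaws.com"]
    }
    actions   = ["s3:GetBucketAcl"]
    resources = [aws_s3_bucket.sentinel_logs.arn]
    condition {
      test     = "StringEquals"
      variable = "aws:SourceAccount"
      values   = var.source_account_ids
    }
  }
}
```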
GuardDuty doesn't follow this pattern. Findings are exported via aws_guardduty_publishing_destination, which writes from the delegated administrator account using guardduty.amazonaws.com — a different service principal. It also needs its own KMS key grant and a separate bucket policy statement. If you're templating your bucket policies across sources, this is the one that'll break.
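A minimal sketch of that export, assuming the detector resource lives in the delegated administrator account behind a provider alias and the key ARN comes from a variable:

```hcl
# Sketch only: provider alias, detector resource, and variable names are
# assumptions. GuardDuty writes as guardduty.amazonaws.com, not
# delivery.logs.amazonaws.com, so it needs its own bucket policy statement
# and KMS grant.
resource "aws_guardduty_publishing_destination" "sentinel" {
  provider        = aws.security # delegated GuardDuty administrator
  detector_id     = aws_guardduty_detector.main.id
  destination_arn = "${var.sentinel_bucket_arn}/guardduty"
  kms_key_arn     = var.sentinel_kms_key_arn
}
```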
One Queue Per Source
Sentinel's S3 connector maps each SQS queue to a specific analytics table. VPC Flow Logs go to AWSVPCFlow, GuardDuty findings to AWSGuardDuty, and so on. One SQS queue per log source. Not one per bucket, not one shared queue. This constraint drives the bucket layout.
A single consolidated bucket in the Logging account with prefix-based separation, and one SQS queue per prefix in the Security account:
```
s3://sentinel-logs-{logging_account_id}/
├── guardduty/
├── vpc-flow-logs/
├── network-firewall/
│   ├── alert/
│   └── flow/
└── waf/
```

S3 event notifications route each prefix to its corresponding queue:
```hcl
resource "aws_s3_bucket_notification" "sentinel" {
  provider = aws.logging
  bucket   = aws_s3_bucket.sentinel_logs.id

  dynamic "queue" {
    for_each = {
      guardduty     = "guardduty/"
      vpc-flow-logs = "vpc-flow-logs/"
      nfw           = "network-firewall/"
      waf           = "waf/"
    }
    content {
      id            = queue.key
      queue_arn     = aws_sqs_queue.sentinel_queues[queue.key].arn
      events        = ["s3:ObjectCreated:*"]
      filter_prefix = queue.value
    }
  }
}
```

The queues sit in the Security account — consumption infrastructure separated from storage, with Security acting as the SIEM integration point. CloudTrail is the exception, covered below.
Because the bucket and queues are in different accounts, each queue needs a resource policy allowing s3.amazonaws.com to send messages. Without it, S3 event notifications are silently dropped. Nothing on the S3 side, no dead letter, no indication anything is wrong. This was the most time-consuming failure to track down because there's zero feedback when it happens.
```hcl
resource "aws_sqs_queue_policy" "sentinel" {
  for_each  = aws_sqs_queue.sentinel_queues
  queue_url = each.value.url

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "s3.amazonaws.com" }
      Action    = "sqs:SendMessage"
      Resource  = each.value.arn
      Condition = {
        ArnEquals    = { "aws:SourceArn" = aws_s3_bucket.sentinel_logs.arn }
        StringEquals = { "aws:SourceAccount" = var.logging_account_id }
      }
    }]
  })
}
```

Why one bucket with prefix separation rather than a bucket per source? Separate buckets would mean five or more of them, each needing its own KMS key, lifecycle policy, public access block, and versioning config — and any encryption change ripples across all of them. Prefixes give the same separation without five sets of bucket infrastructure to manage.
CloudTrail stays in the Control Tower-managed bucket. Moving it risks breaking CT's integrity chain. The cleanest approach is keeping the entire CloudTrail Sentinel integration within the Logging account: its own OIDC provider, IAM role (OIDC_<prefix>-sentinel-cloudtrail), and SQS queue, all co-located. The CT bucket policy gets a read grant for the new role. Everything else stays as Control Tower left it.
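An illustrative fragment of that read grant (the data source, role resource name, and action list here are assumptions; the role name follows the OIDC_ convention from the text):

```hcl
# Hypothetical sketch: a statement merged into the existing Control Tower
# bucket policy, granting the Sentinel CloudTrail role read access.
data "aws_iam_policy_document" "ct_bucket_read" {
  statement {
    sid    = "SentinelCloudTrailRead"
    effect = "Allow"
    principals {
      type        = "AWS"
      identifiers = [aws_iam_role.sentinel_cloudtrail.arn]
    }
    actions = ["s3:GetObject", "s3:ListBucket"]
    resources = [
      data.aws_s3_bucket.ct_logs.arn,
      "${data.aws_s3_bucket.ct_logs.arn}/*",
    ]
  }
}
```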
The OIDC Trust
Sentinel authenticates to AWS using OIDC federation, but not as your Entra ID tenant. It uses a Microsoft-owned service principal with a fixed tenant ID and audience URI. Same values for every Sentinel customer.
```hcl
locals {
  # Microsoft's service tenant and audience — same for every customer
  sentinel_oidc_provider_url = "sts.windows.net/33e01921-4d64-4f8c-a055-5bdaffd5e33d/"
  sentinel_audience          = "api://1462b192-27f7-4cb9-8523-0f4ecb54b47e"
}

resource "aws_iam_openid_connect_provider" "sentinel" {
  url             = "https://${local.sentinel_oidc_provider_url}"
  client_id_list  = [local.sentinel_audience]
  thumbprint_list = ["626d44e704d1ceabe3bf0d53397464ac8080142c"]
}
```

The trust policy pins the session name to MicrosoftSentinel_{workspace_id}, scoping access to your specific workspace rather than any Microsoft service sharing the same OIDC provider:
```hcl
assume_role_policy = jsonencode({
  Version = "2012-10-17"
  Statement = [{
    Effect = "Allow"
    Principal = {
      Federated = aws_iam_openid_connect_provider.sentinel.arn
    }
    Action = "sts:AssumeRoleWithWebIdentity"
    Condition = {
      StringEquals = {
        "${local.sentinel_oidc_provider_url}:aud" = local.sentinel_audience
        "sts:RoleSessionName" = "MicrosoftSentinel_${var.sentinel_workspace_id}"
      }
    }
  }]
})
```

Two naming constraints that cost me time. The IAM role name must start with OIDC_. The session name condition must start with MicrosoftSentinel_. These are hard-validated by the Sentinel connector — get either wrong and the connector fails to initialise with an authentication error that tells you nothing useful. Microsoft documents this in a footnote rather than anywhere prominent. A callout box would save everyone a couple of hours.
Monitoring
Each SQS queue gets a CloudWatch alarm on ApproximateAgeOfOldestMessage. If the oldest message exceeds your ingestion SLA (we used 5 minutes), the alarm fires. This catches ingestion stalls — if Sentinel stops polling, message age climbs fast.
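A sketch of that alarm, reusing the queue map from earlier and assuming an SNS topic variable for notifications:

```hcl
# Sketch only: the SNS topic variable and alarm naming are assumptions.
# Threshold matches the 5-minute ingestion SLA from the text.
resource "aws_cloudwatch_metric_alarm" "queue_age" {
  for_each            = aws_sqs_queue.sentinel_queues
  alarm_name          = "sentinel-${each.key}-message-age"
  namespace           = "AWS/SQS"
  metric_name         = "ApproximateAgeOfOldestMessage"
  dimensions          = { QueueName = each.value.name }
  statistic           = "Maximum"
  period              = 60
  evaluation_periods  = 5
  threshold           = 300 # seconds
  comparison_operator = "GreaterThanThreshold"
  alarm_actions       = [var.alerts_sns_topic_arn]
}
```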
Each dead letter queue gets a separate alarm that fires on any visible message. Messages move to the DLQ after 5 failed receive attempts rather than being dropped. DLQ activity means something different from a stall: Sentinel is receiving messages but failing to process them.
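The DLQ wiring and its alarm might look like this (queue and variable names are assumptions; maxReceiveCount of 5 is from the text):

```hcl
# Sketch only: one DLQ per source queue, failed messages redriven after
# 5 receive attempts, and an alarm on any visible DLQ message.
resource "aws_sqs_queue" "dlq" {
  for_each = aws_sqs_queue.sentinel_queues
  name     = "${each.value.name}-dlq"
}

resource "aws_sqs_queue_redrive_policy" "sentinel" {
  for_each  = aws_sqs_queue.sentinel_queues
  queue_url = each.value.url
  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.dlq[each.key].arn
    maxReceiveCount     = 5
  })
}

resource "aws_cloudwatch_metric_alarm" "dlq_messages" {
  for_each            = aws_sqs_queue.sentinel_queues
  alarm_name          = "sentinel-${each.key}-dlq"
  namespace           = "AWS/SQS"
  metric_name         = "ApproximateNumberOfMessagesVisible"
  dimensions          = { QueueName = aws_sqs_queue.dlq[each.key].name }
  statistic           = "Maximum"
  period              = 60
  evaluation_periods  = 1
  threshold           = 0
  comparison_operator = "GreaterThanThreshold"
  alarm_actions       = [var.alerts_sns_topic_arn]
}
```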
Once everything is wired up, test the full path. Drop a test object into the bucket under a monitored prefix, confirm a message appears in the corresponding SQS queue within seconds, then query the Sentinel table (AWSVPCFlow | take 10) to confirm it actually made it to the other side.
Solution Architect with 30 years in cloud infrastructure, security, identity, and .NET engineering.