module-base-s3-bucket

This Terraform module deploys an AWS S3 bucket with the different configurations detailed in the examples below.

This module can enable and create the S3 inventory through the inventory_configuration variable. This variable can contain the following parameters:
enabled                  - (Optional, Default: true) Specifies whether the inventory is enabled or disabled.
included_object_versions - (Required) Object versions to include in the inventory list. Valid values: All, Current. Default value: All
optional_fields          - (Optional) List of optional fields that are included in the inventory results. Please refer to the S3 documentation: https://docs.aws.amazon.com/AmazonS3/latest/API/API_InventoryConfiguration.html#AmazonS3-Type-InventoryConfiguration-OptionalFields for more details. Default value: ["Size", "LastModifiedDate", "StorageClass"]

schedule - (Required) Specifies the schedule for generating inventory results (documented below).
     frequency - (Required) Specifies how frequently inventory results are produced. Valid values: Daily, Weekly. Default value: Weekly

filter - (Optional) Specifies an inventory filter. The inventory only includes objects that meet the filter's criteria (documented below).
     prefix - (Optional) Prefix that an object must have to be included in the inventory results.

destination - (Required) Contains information about where to publish the inventory results (documented below).
     bucket - (Required) The name of the bucket where inventory results are published.
         bucket_arn - (Required) Amazon S3 bucket ARN of the destination.
         format     - (Required) Specifies the inventory result format. Valid values: CSV, ORC or Parquet.
         prefix     - (Optional) Specifies the destination path to inventory results within the destination bucket.
         account_id - (Optional) ID of the account that owns the destination bucket. Recommended to be set to prevent problems if the destination bucket ownership changes.
         encryption - (Optional) Contains the type of server-side encryption to use to encrypt the inventory (documented below).
         sse_kms    - (Optional) Specifies to use server-side encryption with AWS KMS-managed keys to encrypt the inventory file (documented below).
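
For reference, the parameters above combine into a value such as the following minimal sketch. The rule key "daily-inventory" and the destination bucket ARN are illustrative, and the shape mirrors the usage examples below:

```hcl
inventory_configuration = {
  "daily-inventory" = {
    frequency                = "Daily"   # produce inventory results every day
    included_object_versions = "All"
    destination = {
      bucket = {
        bucket_arn = "arn:aws:s3:::my-inventory-destination" # illustrative destination ARN
        format     = "CSV"
        prefix     = "inventory/"
      }
    }
  }
}
```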
By default, this module sets the Object Ownership property to Bucket owner enforced, so that the ACL does not have to be configured and is deactivated, as Amazon recommends. If you want to change this property, set the object_ownership variable to the corresponding value.
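
For example, to switch back to ACL-based ownership, pass one of the other valid values when calling the module ("ObjectWriter" shown here; see the complete configuration example below for this setting used together with an ACL grant):

```hcl
# Revert from the BucketOwnerEnforced default to ACL-based ownership
object_ownership = "ObjectWriter"
```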
Lifecycle policies can be provided in two formats through the lifecycle_rule variable: since the variable is declared as any, it admits either JSON or map syntax. Examples of lifecycle implementation and deployment follow; the JSON format is recommended.
All lifecycle rules are deployed by default with status = "Enabled"; if you want to disable a lifecycle rule, set status = "Disabled" on each rule you want to disable.
Regarding the lifecycle structure, the rule name is the map key. It is recommended to define a descriptive name for each rule so that its function can be distinguished. You can see the options in the examples below.

Usage Lifecycle

Lifecycle with Disabled status and a descriptive name

lifecycle_rule = {
  "Transition to deep_archive in 1 day" = {
    status = "Disabled"
    filter = {
      prefix = "topics/rrt/"
    }
    transition = {
      days = 1
      storage_class = "DEEP_ARCHIVE"
    }
    noncurrent_version_transition = {
      noncurrent_days = 1
      storage_class   = "DEEP_ARCHIVE"
    }
  }
}

Lifecycle in the same module with lifecycle_rule variable in map format

lifecycle_rule = {
  transition_to_deep_archive = {
    filter = {
      prefix = "topics/rrt/"
    }
    transition = {
      days = 1
      storage_class = "DEEP_ARCHIVE"
    }
    noncurrent_version_transition = {
      noncurrent_days = 1
      storage_class = "DEEP_ARCHIVE"
    }
  }
  expire-rrtf = {
    filter = {
      prefix = "topics/resiber.rrtf.raw.0/"
    }
    expiration = {
      days = 1
    }
    noncurrent_version_expiration = {
      noncurrent_days = 1
    }
  }
  expire-rrtp = {
    filter = {
      prefix = "topics/resiber.rrtp.raw.0/"
    }
    expiration = {
      days = 1
    }
    noncurrent_version_expiration = {
      noncurrent_days = 1
    }
  }
  expire-rrtk = {
    filter = {
      prefix = "topics/resiber.rrtk.raw.0/"
    }
    expiration = {
      days = 1
    }
    noncurrent_version_expiration = {
      noncurrent_days = 1
    }
  }
  expire-rrtg = {
    filter = {
      prefix = "topics/resiber.rrtg.raw.0/"
    }
    expiration = {
      days = 1
    }
    noncurrent_version_expiration = {
      noncurrent_days = 1
    }
  }
  transition_to_glacier_ibis_availability_offers = {
    filter = {
      prefix = "topics/ibis_availability_offers/"
    }
    transition = {
      days          = 7
      storage_class = "DEEP_ARCHIVE"
    }
    expiration = {
      days = 365
    }
  }
}

Lifecycle in the same module with the lifecycle_rule variable in JSON file format

{
   "transition_to_deep_archive": {
       "filter": {
           "prefix": "topics/rrt/"
       },
       "transition": {
           "days": 1,
           "storage_class": "DEEP_ARCHIVE"
       },
       "noncurrent_version_transition": {
           "noncurrent_days": 1,
           "storage_class": "DEEP_ARCHIVE"
       }
   },
   "expire-rrtf": {
       "filter": {
           "prefix": "topics/resiber.rrtf.raw.0/"
       },
       "expiration": {
           "days": 1
       },
       "noncurrent_version_expiration": {
           "noncurrent_days": 1
       }
   },
   "expire-rrtp": {
       "filter": {
           "prefix": "topics/resiber.rrtp.raw.0/"
       },
       "expiration": {
           "days": 1
       },
       "noncurrent_version_expiration": {
           "noncurrent_days": 1
       }
   },
   "expire-rrtk": {
       "filter": {
           "prefix": "topics/resiber.rrtk.raw.0/"
       },
       "expiration": {
           "days": 1
       },
       "noncurrent_version_expiration": {
           "noncurrent_days": 1
       }
   },
   "expire-rrtg": {
       "filter": {
           "prefix": "topics/resiber.rrtg.raw.0/"
       },
       "expiration": {
           "days": 1
       },
       "noncurrent_version_expiration": {
           "noncurrent_days": 1
       }
   },
   "transition_to_glacier_ibis_availability_offers": {
       "filter": {
           "prefix": "topics/ibis_availability_offers/"
       },
       "transition": {
           "days": 7,
           "storage_class": "DEEP_ARCHIVE"
       },
       "expiration": {
           "days": 365
       }
   }
}

Usage example

Basic S3 Bucket

module "s3" {
 source = "git::https://gitlab.vectoritcgroup.com/vectordigital/iac/terraform/modules/aws/storage/module-base-s3-bucket.git?ref=vX.Y.Z"

 aws_region = var.aws_region
 prefix     = var.prefix

 name = "bucket-grafana-logs"

 tags = {
   "Project"     = "test"
   "Environment" = "basic"
 }
}

Basic S3 Bucket with JSON type lifecycle expiration days and transition, and inventory configuration

module "s3" {
 source = "git::https://gitlab.vectoritcgroup.com/vectordigital/iac/terraform/modules/aws/storage/module-base-s3-bucket.git?ref=vX.Y.Z"

 aws_region = var.aws_region
 prefix     = var.prefix

 name = "bucket-grafana-logs"

 lifecycle_rule = {
  "test": {
      "filter" : {
         "prefix": "test/"
      },
      "expiration": {
          "days": 31
      }
  },
  "test2": {
      "filter" : {
         "prefix": "test2/"
      },
      "transition": {
          "days": 31,
          "storage_class": "STANDARD_IA"
      },
      "expiration": {
          "days": 60
      }
  }
 }

 inventory_configuration = {
   "default" = {
     destination = {
       bucket = {
         bucket_arn = module.s3_log.arn
         prefix     = "default/"
       }
     }
   }
   "without-filter" = {
     frequency                = "Daily"
     optional_fields          = []
     included_object_versions = "Current"
     destination = {
       bucket = {
         account_id = data.aws_caller_identity.current.account_id
         bucket_arn = module.s3_log.arn
         format     = "ORC"
         prefix     = "inventory/"
       }
     }
   }
   disabled = {
     enabled = false
     filter = {
       prefix = "disabled/"
     }
     destination = {
       bucket = {
         account_id = data.aws_caller_identity.current.account_id
         bucket_arn = module.s3_log.arn
         format     = "CSV"
         prefix     = "inventory_disabled/"
       }
     }
   }
 }

 tags = {
   "Project"     = "test"
   "Environment" = "basic"
 }
}

Basic S3 Bucket with map type lifecycle expiration days and transition

module "s3" {
 source = "git::https://gitlab.vectoritcgroup.com/vectordigital/iac/terraform/modules/aws/storage/module-base-s3-bucket.git?ref=vX.Y.Z"

 aws_region = var.aws_region
 prefix     = var.prefix

 name = "bucket-grafana-logs"

 lifecycle_rule = {
   test = {
     filter = {
       prefix = "test/"
     },
     expiration = {
       days = 31
     }
   }
   test2 = {
     filter = {
       prefix = "test2/"
     },
     transition = {
       days          = 31
       storage_class = "STANDARD_IA"
     },
     expiration = {
       days = 60
     }
   }
 }

 tags = {
   "Project"     = "test"
   "Environment" = "basic"
 }
}

Basic S3 bucket with lifecycle expiration days and transition, in JSON format in a lifecycle.json file

module "s3" {
 source = "git::https://gitlab.vectoritcgroup.com/vectordigital/iac/terraform/modules/aws/storage/module-base-s3-bucket.git?ref=vX.Y.Z"

 aws_region = var.aws_region
 prefix     = var.prefix

 name = "bucket-grafana-logs"

 lifecycle_rule = jsondecode(file("lifecycle.json"))

 tags = {
   "Project"     = "test"
   "Environment" = "basic"
 }
}

Private S3 Bucket with versioning and a lifecycle example for expiration days and noncurrent version deletion

module "s3" {
 source = "git::https://gitlab.vectoritcgroup.com/vectordigital/iac/terraform/modules/aws/storage/module-base-s3-bucket.git?ref=vX.Y.Z"

 aws_region = var.aws_region
 prefix     = var.prefix

 name = "bucket-grafana-logs"

 versioning_enabled = true

 lifecycle_rule = {
  "transition_to_deep_archive": {
      "filter": {
          "prefix": "topics/rrt/"
      },
      "transition": {
          "days": 1,
          "storage_class": "DEEP_ARCHIVE"
      },
      "noncurrent_version_transition": {
          "noncurrent_days": 1,
          "storage_class": "DEEP_ARCHIVE"
      }
  },
  "expire-rrtf": {
      "filter": {
          "prefix": "topics/resiber.rrtf.raw.0/"
      },
      "expiration": {
          "days": 1
      },
      "noncurrent_version_expiration": {
          "noncurrent_days": 1
      }
  },
  "expire-rrtp": {
      "filter": {
          "prefix": "topics/resiber.rrtp.raw.0/"
      },
      "expiration": {
          "days": 1
      },
      "noncurrent_version_expiration": {
          "noncurrent_days": 1
      }
  },
  "expire-rrtk": {
      "filter": {
          "prefix": "topics/resiber.rrtk.raw.0/"
      },
      "expiration": {
          "days": 1
      },
      "noncurrent_version_expiration": {
          "noncurrent_days": 1
      }
  },
  "expire-rrtg": {
      "filter": {
          "prefix": "topics/resiber.rrtg.raw.0/"
      },
      "expiration": {
          "days": 1
      },
      "noncurrent_version_expiration": {
          "noncurrent_days": 1
      }
  },
  "transition_to_glacier_ibis_availability_offers": {
      "filter": {
          "prefix": "topics/ibis_availability_offers/"
      },
      "transition": {
          "days": 7,
          "storage_class": "DEEP_ARCHIVE"
      },
      "expiration": {
          "days": 365
      }
  }
 }

 tags = {
   "Project"     = "test"
   "Environment" = "basic"
 }
}

Private S3 Bucket with access logging, versioning and MFA delete enabled

module "s3" {
 source = "git::https://gitlab.vectoritcgroup.com/vectordigital/iac/terraform/modules/aws/storage/module-base-s3-bucket.git?ref=vX.Y.Z"

 aws_region = var.aws_region
 prefix     = var.prefix

 name = "bucket-grafana-logs"

 versioning_enabled    = true
 versioning_mfa_delete = "Enabled"

 logging_enabled       = true
 logging_target_bucket = module.s3_log.id
 logging_target_prefix = "log/"

 tags = {
   "Project"     = "test"
   "Environment" = "basic"
 }
}

Private S3 Bucket with access logging enabled, versioning and specific public access block configuration and intelligent tiering configuration

module "s3" {
 source = "git::https://gitlab.vectoritcgroup.com/vectordigital/iac/terraform/modules/aws/storage/module-base-s3-bucket.git?ref=vX.Y.Z"

 aws_region = var.aws_region
 prefix     = var.prefix

 name = "bucket-grafana-logs"

 versioning_enabled = true

 intelligent_tiering_configuration = {
   docs = {
     status = "Enabled"
     filter = {
       prefix = "docs/"
       tags = {
         "Project" = "test"
       }
     },
     tiering = [
       {
         access_tier = "DEEP_ARCHIVE_ACCESS"
         days        = 180
       },
       {
         access_tier = "ARCHIVE_ACCESS"
         days        = 125
       }
     ]
   },
   test = {
     status = "Enabled"
     filter = {
       prefix = "test/"
       tags = {
         "Project" = "test"
       }
     },
     tiering = [{
       access_tier = "ARCHIVE_ACCESS"
       days        = 125
     }]
   }
 }

 logging_enabled       = true
 logging_target_bucket = module.s3_log.id
 logging_target_prefix = "log/"

 tags = {
   "Project"     = "test"
   "Environment" = "basic"
 }
}

Public S3 Bucket with CORS Rule configuration, policy and acceleration enabled

module "s3" {
 source = "git::https://gitlab.vectoritcgroup.com/vectordigital/iac/terraform/modules/aws/storage/module-base-s3-bucket.git?ref=vX.Y.Z"

 aws_region = var.aws_region
 prefix     = var.prefix

 name = "bucket-grafana-logs"

 versioning_enabled = true

 public_access_block = {
   block_public_acls       = false
   ignore_public_acls      = false
   block_public_policy     = false
   restrict_public_buckets = false
 }

 acceleration_status = true

 cors_rule = [
   {
     allowed_headers = ["*"]
     allowed_methods = ["PUT", "POST"]
     allowed_origins = ["https://s3-website-test.hashicorp.com"]
     expose_headers  = ["ETag"]
     max_age_seconds = 3000
   }
 ]

 create_bucket_policy = true
 bucket_policy = <<POLICY
{
 "Version": "2012-10-17",
 "Id": "MYBUCKETPOLICY",
 "Statement": [
   {
     "Sid": "IPAllow",
     "Effect": "Deny",
     "Principal": "*",
     "Action": "s3:*",
     "Resource": "arn:aws:s3:::${module.s3.name}/*",
     "Condition": {
        "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
     }
   }
 ]
}
POLICY

 tags = {
   "Project"     = "test"
   "Environment" = "basic"
 }
}

Public S3 Bucket with complete configuration

module "s3" {
 source = "git::https://gitlab.vectoritcgroup.com/vectordigital/iac/terraform/modules/aws/storage/module-base-s3-bucket.git?ref=vX.Y.Z"

 aws_region = var.aws_region
 prefix     = var.prefix

 name = "bucket-grafana-logs"

 acceleration_status = true

 server_side_encryption_configuration_enabled = true

 versioning_enabled = true

 public_access_block = {
   block_public_acls       = false
   ignore_public_acls      = true
   block_public_policy     = true
   restrict_public_buckets = true
 }

 object_ownership = "ObjectWriter"
 permission_access_control_policy_grantee = [
   {
     permission_canonicaluser             = "FULL_CONTROL"
     permission_everyone                  = ""
     permission_authenticated_users_group = "READ"
     permission_s3_log_delivery_group     = "FULL_CONTROL"
   }
 ]

 cors_rule = [
   {
     allowed_headers = ["*"]
     allowed_methods = ["PUT", "POST"]
     allowed_origins = ["https://s3-website-test.hashicorp.com"]
     expose_headers  = ["ETag"]
     id              = "test"
     max_age_seconds = 3000
   }
 ]

 logging_enabled       = true
 logging_target_bucket = module.s3_log.id
 logging_target_prefix = "log/"

 notification_eventbridge = true

 notification_topic = [
   {
     topic_arn     = aws_sns_topic.topic.arn
     events        = ["s3:ObjectCreated:*"]
     filter_suffix = ".log"
     id            = "test-topic"
   }
 ]

 notification_queue = [
   {
     queue_arn     = aws_sqs_queue.this.arn
     events        = ["s3:ObjectCreated:*"]
     filter_suffix = ".log"
     id            = "test-queue"
   }
 ]

 notification_lambda_function = [
   {
     lambda_function_arn = aws_lambda_function.func.arn
     events              = ["s3:ObjectCreated:*"]
     filter_prefix       = "func/pending_payments/baggages/input"
     id                  = "lambda-1"
   },
   {
     lambda_function_arn = aws_lambda_function.func2.arn
     events              = ["s3:ObjectCreated:*"]
     filter_prefix       = "func2/pending_payments/baggages/input"
     id                  = "lambda-2"
   },
   {
     lambda_function_arn = aws_lambda_function.func3.arn
     events              = ["s3:ObjectCreated:*"]
     filter_prefix       = "func3/pending_payments/baggages/input"
     id                  = "lambda-3"
   }
 ]

 intelligent_tiering_configuration = {
   docs = {
     status = "Enabled"
     filter = {
       prefix = "docs/"
       tags = {
         "Project"     = "test"
         "Environment" = "basic"
       }
     },
     tiering = [
       {
         access_tier = "DEEP_ARCHIVE_ACCESS"
         days        = 180
       },
       {
         access_tier = "ARCHIVE_ACCESS"
         days        = 125
       }
     ]
   },
   test = {
     status = "Enabled"
     filter = {
       prefix = "test/"
       tags = {
         "Project"     = "test"
         "Environment" = "basic"
       }
     },
     tiering = [{
       access_tier = "ARCHIVE_ACCESS"
       days        = 125
     }]
   },
   disable = {
     status = "Disabled"
     filter = {
       prefix = null
       tags   = {}
     },
     tiering = [{
       access_tier = "ARCHIVE_ACCESS"
       days        = 125
     }]
   }
 }

 lifecycle_rule = {
  "test": {
      "filter" : {
          "prefix" : "test/"
      },
      "expiration": {
          "days": 31
      }
  },
  "test2": {
      "filter" : {
          "prefix" : "test2/"
      },
      "transition": {
          "days": 31,
          "storage_class": "STANDARD_IA"
      },
      "expiration": {
          "days": 60
      }
  }
 }

 inventory_configuration = {
   "default" = {
     destination = {
       bucket = {
         bucket_arn = module.s3_log.arn
         prefix     = "default/"
       }
     }
   }
   "without-filter" = {
     frequency                = "Daily"
     optional_fields          = []
     included_object_versions = "Current"
     destination = {
       bucket = {
         account_id = data.aws_caller_identity.current.account_id
         bucket_arn = module.s3_log.arn
         format     = "ORC"
         prefix     = "inventory/"
       }
     }
   }
   disabled = {
     enabled = false
     filter = {
       prefix = "disabled/"
     }
     destination = {
       bucket = {
         account_id = data.aws_caller_identity.current.account_id
         bucket_arn = module.s3_log.arn
         format     = "CSV"
         prefix     = "inventory_disabled/"
       }
     }
   }
 }

 create_bucket_policy = true
 bucket_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Id": "MYBUCKETPOLICY",
  "Statement": [
     {
      "Sid": "IPAllow",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::${module.s3.name}/*",
      "Condition": {
        "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
       }
     }
   ]
}
POLICY

 tags = {
   "Project"     = "test"
   "Environment" = "complete"
  }
}

Module argument reference

Modules

No modules.

Inputs

Name Description Type Default Required

Sets the accelerate configuration of an existing bucket. Can be Enabled or Suspended

bool

false

no

The canned ACL to apply. Defaults to private. Valid values are private, public-read, public-read-write, aws-exec-read, authenticated-read, bucket-owner-read, bucket-owner-full-control and log-delivery-write. Conflicts with grant

string

""

no

AWS Region name where the S3 Bucket will be deployed

string

n/a

yes

A valid bucket policy JSON document. Note that if the policy document is not specific enough (but still valid), Terraform may view the policy as constantly changing in a terraform plan. In this case, please make sure you use the verbose/specific version of the policy

string

""

no

(Forces new resource) Creates a unique bucket name beginning with the specified prefix. Conflicts with bucket. Must be lowercase and less than or equal to 37 characters in length. A full list of bucket naming rules may be found here https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html

string

""

no

A list of object parameters as cors_rule blocks

list(object({
    allowed_headers = list(string)
    allowed_methods = list(string)
    allowed_origins = list(string)
    expose_headers  = list(string)
    id              = string
    max_age_seconds = number
  }))

[]

no

If true, creates a custom bucket policy

bool

false

no

(Forces new resource) The account ID of the expected bucket owner, applied for all resources

string

""

no

A boolean that indicates all objects (including any locked objects) should be deleted from the bucket so that the bucket can be destroyed without error. These objects are not recoverable

bool

true

no

List of properties for S3 Intelligent-Tiering storage class tiers of the configuration

map(object({
    status = string // Specifies the status of the configuration. Valid values: Enabled, Disabled.
    filter = object({
      prefix = string      // Object key name prefix that identifies the subset of objects to which the configuration applies
      tags   = map(string) // All of these tags must exist in the object's tag set in order for the configuration to apply
    })
    tiering = list(object({
      access_tier = string // S3 Intelligent-Tiering access tier. Valid values: ARCHIVE_ACCESS, DEEP_ARCHIVE_ACCESS
      days        = number // Number of consecutive days of no access after which an object will be eligible to be transitioned to the corresponding tier.
    }))
  }))

{}

no

Map of properties for the S3 Inventory configuration

any

{}

no

List of maps containing configuration of object lifecycle management

any

{}

no

Enable logging feature

bool

false

no

The name of the bucket where you want Amazon S3 to store server access logs

string

""

no

Email address of the grantee. See Regions and Endpoints for supported AWS regions where this argument can be specified

string

""

no

The canonical user ID of the grantee

string

""

no

Logging permissions assigned to the grantee for the bucket. Valid values: FULL_CONTROL, READ, WRITE

string

""

no

Type of grantee. Valid values: CanonicalUser, AmazonCustomerByEmail, Group

string

""

no

URI of the grantee group

string

""

no

A prefix for all log object keys

string

""

no

The name that identifies this S3 Bucket. Must be lowercase and less than or equal to 63 characters in length

string

n/a

yes

Whether to enable Amazon EventBridge notifications

bool

false

no

Used to configure notifications to a Lambda Function

list(any)

[]

no

Notification configuration to SQS Queue

list(any)

[]

no

Notification configuration to SNS Topic

list(any)

[]

no

(Default: false, Forces new resource) Indicates whether this bucket has an Object Lock configuration enabled. Valid values are true or false

bool

false

no

Object ownership. Valid values: 'BucketOwnerPreferred', 'ObjectWriter' or 'BucketOwnerEnforced'

string

"BucketOwnerEnforced"

no

A list of object parameters as access_control_policy blocks. Logging permissions assigned to the grantee for the bucket. Valid values: FULL_CONTROL, WRITE, WRITE_ACP, READ, READ_ACP, or an empty string if you do not want to configure this part of the permissions for the ACL object. See 'What permissions can I grant?' https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#permissions for more details about what each permission means in the context of buckets.

list(object({
    permission_canonicaluser             = string // Logging permissions assigned to the grantee for the Canonical User. Valid values: FULL_CONTROL, WRITE, WRITE_ACP, READ, READ_ACP, or "" if you do not want to report or configure this permissions part for the ACL object
    permission_everyone                  = string // Logging permissions assigned to the grantee for Everyone (public access). Valid values: FULL_CONTROL, WRITE, WRITE_ACP, READ, READ_ACP, or "" if you do not want to report or configure this permissions part for the ACL object
    permission_authenticated_users_group = string // Logging permissions assigned to the grantee for Authenticated users group (anyone with an AWS account). Valid values: FULL_CONTROL, WRITE, WRITE_ACP, READ, READ_ACP, or "" if you do not want to report or configure this permissions part for the ACL object
    permission_s3_log_delivery_group     = string // Logging permissions assigned to the grantee for S3 log delivery group. Valid values: FULL_CONTROL, WRITE, WRITE_ACP, READ, READ_ACP, or "" if you do not want to report or configure this permissions part for the ACL object
  }))
[
  {
    "permission_authenticated_users_group": "",
    "permission_canonicaluser": "FULL_CONTROL",
    "permission_everyone": "",
    "permission_s3_log_delivery_group": ""
  }
]

no

The prefix to be attached to every resource name

string

n/a

yes

An object to configure the bucket's public access block configuration

object({
    block_public_acls       = bool // "Block Public Access: Block public access to buckets and objects granted through new access control lists (ACLs)"
    ignore_public_acls      = bool // "Block Public Access: Block public access to buckets and objects granted through any access control lists (ACLs)"
    block_public_policy     = bool // "Block Public Access: Block public access to buckets and objects granted through new public bucket or access point policies"
    restrict_public_buckets = bool // "Block Public Access: Block public and cross-account access to buckets and objects through any public bucket or access point policies"
  })
{
  "block_public_acls": true,
  "block_public_policy": true,
  "ignore_public_acls": true,
  "restrict_public_buckets": true
}

no

Enable Server Side Encryption feature

bool

false

no

The AWS KMS master key ID used for the SSE-KMS encryption. This can only be used when you set the value of sse_algorithm as aws:kms. The default aws/s3 AWS KMS master key is used if this element is absent while the sse_algorithm is aws:kms

string

""

no

The server-side encryption algorithm to use. Valid values are AES256 and aws:kms

string

"AES256"

no

Specific tags for all module resources

map(string)

n/a

yes

Enable versioning feature

bool

false

no

(Required if versioning_configuration mfa_delete is enabled) The concatenation of the authentication device’s serial number, a space, and the value that is displayed on your authentication device

string

""

no

Specifies whether MFA delete is enabled in the bucket versioning configuration. Valid values: Enabled or Disabled

string

"Disabled"

no

The versioning state of the bucket. Valid values: Enabled, Suspended, or Disabled. Disabled should only be used when creating or importing resources that correspond to unversioned S3 buckets

string

"Enabled"

no

Outputs

Name Description

arn

The ARN of the bucket. Will be of format

The bucket domain name. Will be of format

id

The name of the bucket

The name of the bucket

The S3 Bucket endpoint custom DNS name

The bucket region-specific domain name: the bucket domain name including the region name. Note: AWS CloudFront allows specifying an S3 region-specific endpoint when creating an S3 origin; this prevents redirect issues from CloudFront to the S3 origin URL
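
The named outputs can be read from the calling configuration in the usual Terraform way; for example, assuming the module label "s3" used throughout the examples above:

```hcl
# Surface the module's bucket identifiers from the root configuration
output "bucket_arn" {
  description = "ARN of the bucket created by the module"
  value       = module.s3.arn
}

output "bucket_id" {
  description = "Name (id) of the bucket created by the module"
  value       = module.s3.id
}
```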