AIS Buckets: Design and Operations

A bucket is a named container for objects - monolithic files or chunked representations - with associated metadata.

Buckets are the primary unit of data organization and policy application in AIStore (AIS).

Object metadata includes checksum, version, size, access time, replica/EC placement, unique bucket ID (BID), and custom user-defined attributes. For remote buckets, AIS may also store backend-specific metadata such as ETag, LastModified timestamps, backend version identifiers, and provider checksums when available.

Metadata v2 includes additional flags used by AIS features (for example, chunked object representation).

AIS uses a flat hierarchy: bucket-name/object-name key space. It supports virtual directories through prefix-based naming with recursive and non-recursive operations.

This document is organized in two parts:

  • Part I: Design - covers the bucket abstraction, identity model, namespaces, remote clusters, and backend buckets
  • Part II: How-To - practical operations: working with same-name buckets, prefetch/evict, access control, provider configuration, and CLI reference

Part I: Design

AIS does not treat a bucket as a passive container. A bucket is a logical namespace that AIS materializes lazily (for remote backends), configures dynamically, and manages cluster-wide.

Table of Contents

  1. Motivation
  2. The Bucket
  3. Bucket Lifecycle
  4. Namespaces
  5. Remote AIS Clusters
  6. Backend Buckets
  7. System Buckets

Motivation

The idea is to provide a unified storage abstraction. Instead of maintaining different APIs for in-cluster storage, cloud providers, and other remote backends, AIS exposes everything through a single, consistent bucket abstraction.

The design goals were (and remain):

  • Operational Simplicity: Eliminate “registration overhead.” If a bucket exists in the backend, it should be immediately usable in AIS.
  • Provider Agnostic: The API remains identical whether the data resides on local NVMe drives, a remote AIS cluster, or a public cloud provider.
  • Dynamic Configuration: Buckets are not passive containers; they are logical namespaces where data protection (EC, Mirroring) and caching policies (LRU) are applied dynamically.

Users interact with buckets uniformly, regardless of where they live:

  • Local disks (AIS provider)
  • AWS, GCP, Azure, OCI
  • S3-compatible systems (SwiftStack, Cloudian, MinIO, Oracle OCI, etc.)
  • Other AIS clusters

The provider and namespace differentiate the backend; the API stays the same.

| Type | Description | Example |
| --- | --- | --- |
| AIS Bucket | Native bucket managed by this cluster (the one addressed by the AIS endpoint that you have) | `ais://mybucket` |
| Remote AIS Bucket | Bucket in a remote AIS cluster | `ais://@remais/mybucket` |
| Cloud Bucket | Remote bucket (S3, GCS, Azure, OCI) | `s3://dataset` |
| Backend Bucket | AIS bucket linked to a remote bucket | `ais://cache => s3://origin` |

Creation

Another core design goal was to eliminate boilerplate: if a bucket exists in the remote backend (Cloud, Remote AIS, etc.) and is accessible, AIS makes it immediately usable. Remote buckets are added lazily, on first reference, without a separate creation step.

Explicit creation is supported when additional control is required - credentials, endpoints, namespaces, or properties that must be set before first access.

Further details are in the Bucket Lifecycle section below.

Bucket Identity

Once added to BMD, a bucket’s identity becomes cluster-wide and immutable:

Identity = Provider + Namespace + Name

AIS never guesses or rewrites identity. s3://#ns1/bucket and s3://#ns2/bucket are distinct buckets.
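To make the identity triplet concrete, here is a minimal, illustrative Python sketch of parsing bucket URIs into (provider, namespace, name) tuples. This is not AIS's actual parser; the scheme table and the `Bck` tuple are simplifications for this example.

```python
from typing import NamedTuple

class Bck(NamedTuple):
    provider: str
    namespace: str   # "" denotes the global namespace
    name: str

# URI scheme -> canonical provider (subset, for illustration)
SCHEMES = {"ais": "ais", "s3": "aws", "aws": "aws", "gs": "gcp", "gcp": "gcp",
           "az": "azure", "azure": "azure", "oc": "oci", "oci": "oci"}

def parse_bucket_uri(uri: str) -> Bck:
    scheme, _, rest = uri.partition("://")
    ns = ""
    # '#' introduces a logical namespace; '@' a remote AIS cluster alias/UUID
    if rest.startswith(("#", "@")):
        ns, _, rest = rest.partition("/")
    return Bck(SCHEMES[scheme], ns, rest)

# Same name, different namespaces => different identities
assert parse_bucket_uri("s3://#ns1/bucket") != parse_bucket_uri("s3://#ns2/bucket")
```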


The Bucket

```
┌────────────── Bucket Identity ─────────────┐
│                                            │
│      ( Provider , Namespace , Name )       │
│                                            │
└────────────────────────────────────────────┘
                ┌────────────────────┐
                │     Properties     │
                │      (Bprops)      │
                └────────────────────┘
```

Example: S3 bucket with namespace

```
┌───────────────────────────────────────┐
│ Identity:                             │
│   • Provider:  aws                    │
│   • Namespace: #prod-account          │
│   • Name:      logs                   │
│                                       │
│ Bprops:                               │
│   • versioning.enabled = true         │
│   • extra.aws.profile  = prod         │
│   • mirror.copies      = 2            │
└───────────────────────────────────────┘
```

Example: AIS bucket with backend

```
┌───────────────────────────────────────┐
│ Identity:                             │
│   • Provider:  ais                    │
│   • Namespace: (global)               │
│   • Name:      cache                  │
│                                       │
│ Bprops:                               │
│   • backend_bck = s3://source-logs    │
│   • lru.enabled = true                │
└───────────────────────────────────────┘
```

See also: Blog: The Many Lives of a Dataset Called “data”

Bucket Name

Bucket names are limited to 64 bytes and may contain only letters, digits, dashes (-), underscores (_), and single dots (.). Consecutive dots (..) are not allowed.

Names that start with . are reserved for system buckets. User-defined buckets therefore cannot use a leading dot, and any unrecognized .-prefixed name is rejected.
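The rules above can be captured in a short validator sketch (illustrative only; AIS enforces its own checks server-side):

```python
import re

_NAME_RE = re.compile(r"^[A-Za-z0-9._-]+$")

def valid_user_bucket_name(name: str) -> bool:
    """Check the documented rules for user buckets: at most 64 bytes,
    limited character set, no consecutive dots, no leading dot."""
    if not name or len(name.encode("utf-8")) > 64:
        return False
    if name.startswith(".") or ".." in name:
        return False
    return _NAME_RE.fullmatch(name) is not None

assert valid_user_bucket_name("my-bucket_1.v2")
assert not valid_user_bucket_name(".sys-inventory")   # leading dot: reserved
assert not valid_user_bucket_name("a..b")             # consecutive dots
```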

Provider

Indicates the storage backend:

| Provider | Backend |
| --- | --- |
| `ais` | Native AIS bucket |
| `aws` or `s3` | Amazon S3 or S3-compatible |
| `gcp` or `gs` | Google Cloud Storage |
| `azure` or `az` | Azure Blob Storage |
| `oci` or `oc` | Oracle Cloud Infrastructure |

Remote AIS clusters use the ais provider with a namespace referencing the cluster alias or UUID:

```
ais://@remais/bucket-name
# or, alternatively, using the remote cluster's UUID:
ais://@uuid/bucket-name
```

Namespace

Namespaces disambiguate buckets that share the same name.

Originally, all cloud buckets had an implicit global namespace. That model breaks when:

  • Different AWS accounts contain same-name buckets
  • S3-compatible endpoints host same-name buckets
  • SwiftStack/Cloudian accounts scope buckets by user

Namespaces fix this:

s3://#account1/images
s3://#account2/images

These resolve to:

```go
Bck{Provider: "aws", Ns: Ns{Name: "account1"}, Name: "images"}
Bck{Provider: "aws", Ns: Ns{Name: "account2"}, Name: "images"}
```

They are independent in every way - separate BMD entries, credentials, and on-disk paths.

Note: The Ns struct has two fields: UUID (for remote AIS clusters) and Name (for logical namespaces). For cloud buckets, the namespace identifier (e.g., #prod in s3://#prod/bucket) enables multiple same-name buckets with different credentials or endpoints.

For remote AIS clusters, the namespace additionally carries the cluster’s UUID:

```
ais://@remais/bucket => Bck{Provider: "ais", Ns: Ns{UUID: "<cluster-uuid>"}, Name: "bucket"}
```

Note: The bucket namespace you choose - whether it represents an AWS profile, a GCS account, or simply a human-readable alias - becomes part of the bucket’s physical on-disk path. What starts as a logical identifier materializes into on-disk naming structure.

See also: Blog: The Many Lives of a Dataset Called “data”

Bucket Properties

Bucket properties - stored in BMD, inherited from cluster config, overridable per-bucket - control data protection (checksums, EC, mirroring), chunked representation, versioning and synchronization with remote sources, LRU eviction, rate limiting, access permissions, provider-specific settings, and more.

The properties:

  • are inherited from cluster-wide configuration at bucket creation time;
  • can be overridden at creation time and/or at any time via ais bucket props set or the corresponding Go or Python API;
  • are applied cluster-wide via metasync;
  • include data layout, checksumming, EC/mirroring, LRU, rate limiting, backend linkage, access control, and more.

At the top level:

| JSON key | Type | What it controls |
| --- | --- | --- |
| `provider` | `string` | Backend provider (`ais`, `aws`, `gcp`, `azure`, `oci`, ...). |
| `backend_bck` | `Bck` | Optional "backend bucket" AIS proxies to (see Backend Buckets). |
| `write_policy` | `WritePolicyConf` | When/how metadata is persisted (immediate, delayed, never). |
| `checksum` | `CksumConf` | Checksum algorithm and validation policies for cold/warm GET. |
| `versioning` | `VersionConf` | Versioning enablement and synchronization with the backend. |
| `mirror` | `MirrorConf` | N-way mirroring (on/off, number of copies). |
| `ec` | `ECConf` | Erasure coding (data/parity slices, size thresholds). |
| `chunks` | `ChunksConf` | Chunked-object layout and multipart-upload behavior. |
| `lru` | `LRUConf` | LRU caching policy: watermarks, enable/disable. |
| `rate_limit` | `RateLimitConf` | Frontend and backend rate limiting (bursty/adaptive shaping). |
| `extra` | `ExtraProps` | Provider-specific: `extra.aws.{profile,endpoint,cloud_region}` for S3-compatible, `extra.gcp.application_creds` for GCS, `extra.oci.region` for OCI. |
| `access` | `AccessAttrs` | Bucket access mask (GET, PUT, DELETE, etc.). |
| `features` | `feat.Flags` | Feature flags to flip assorted defaults (e.g., S3 path-style). |
| `bid` | `uint64` | Unique bucket ID (assigned by AIS, read-only). |
| `created` | `int64` | Bucket creation time (Unix timestamp, read-only). |
| `renamed` | `string` | Deprecated: non-empty only for buckets that have been renamed. |
```shell
# Validate remote version and, possibly, update in-cluster ("cached") copy;
# delete in-cluster object if its remote counterpart does not exist
ais create gs://abc --props="versioning.validate_warm_get=false versioning.synchronize=true"

# Enable mirroring at creation time
ais create ais://abc --props="mirror.enabled=true mirror.copies=3"

# Or using JSON:
ais create ais://abc --props='{"mirror": {"enabled": true, "copies": 3}}'

# Enable mirroring and tweak checksum configuration
ais create ais://abc \
    --props='{
      "mirror": {"enabled": true, "copies": 3},
      "checksum": {"type": "xxhash", "validate_warm_get": true}
    }'

# Configure a cloud bucket with provider-specific extras and rate limiting
ais create s3://logs \
    --props='{
      "extra": {"aws": {"profile": "prod", "endpoint": "https://s3.example.com"}},
      "rate_limit": {"backend": {"enabled": true, "max_bps": "800MB"}}
    }'
```

Feature Flags

Feature flags are a 64-bit bitmask controlling assorted runtime behaviors. Most flags are cluster-wide, but a subset can be configured per-bucket.

Bucket-level flags

| Flag | Tags | Description |
| --- | --- | --- |
| `Skip-Loading-VersionChecksum-MD` | perf, integrity- | Skip loading existing object's metadata (version, checksum) |
| `Fsync-PUT` | integrity+, overhead | Sync object payload to stable storage on PUT |
| `S3-Presigned-Request` | s3, security, compat | Pass-through presigned S3 requests for backend authentication |
| `S3-Use-Path-Style` | s3, compat | Use path-style S3 addressing (e.g., s3.amazonaws.com/BUCKET/KEY) |
| `Resume-Interrupted-MPU` | mpu, ops | Resume interrupted multipart uploads |

For the full list, see this separate Feature Flags document.

Tag meanings

  • integrity+ - enhances data safety
  • integrity- - trades safety for performance
  • perf - performance optimization
  • overhead - may impact performance
  • s3,compat - S3 compatibility

Setting bucket features

```shell
# View available bucket features
ais bucket props set ais://mybucket features <TAB-TAB>

# Enable a feature
ais bucket props set ais://mybucket features S3-Presigned-Request

# Enable multiple features
ais bucket props set ais://mybucket features Fsync-PUT S3-Use-Path-Style

# Reset to defaults (none)
ais bucket props set ais://mybucket features none
```

Some flags are mutually exclusive. For example, Disable-Cold-GET and Streaming-Cold-GET cannot both be set - the system will reject the configuration. For complete details on all feature flags (cluster-wide and bucket-level), see Feature Flags.

Bucket Lifecycle

The distinction between implicit bucket discovery and explicit creation is best summarized by the AIS CLI itself. When you run ais create --help, it outlines the specific scenarios where ‘on-the-fly’ discovery isn’t enough:

```shell
$ ais create --help
NAME:
   ais create - (alias for "bucket create") Create AIS buckets or explicitly attach remote buckets with non-default credentials/properties.
   Normally, AIS auto-adds remote buckets on first access (ls/get/put): when a user references a new bucket,
   AIS looks it up behind the scenes, confirms its existence and accessibility, and "on-the-fly" updates its
   cluster-wide global (BMD) metadata containing bucket definitions, management policies, and properties.
   Use this command when you need to:
    1) create an ais:// bucket in this cluster;
    2) create a bucket in a remote AIS cluster (e.g., 'ais://@remais/BUCKET');
    3) set up a cloud bucket with a custom profile and/or endpoint/region;
    4) set bucket properties before first access;
    5) attach multiple same-name cloud buckets under different namespaces (e.g., 's3://#ns1/bucket', 's3://#ns2/bucket');
    6) and finally, register a cloud bucket that is not (yet) accessible (advanced-usage '--skip-lookup' option).
   ...
```

Implicit creation (lazy discovery)

On first reference:

```shell
ais ls s3://images --all
ais get s3://logs/foo.txt
```

AIS:

  1. Parses the bucket URI into internal control structure (cmn.Bck)
  2. Checks BMD for existing entry
  3. If missing: performs HEAD(bucket) to validate access
  4. Inserts the bucket into BMD with default properties
  5. Metasyncs the updated BMD to all nodes

This behavior is foundational: it removes the operational overhead of bucket management.
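The lazy-discovery steps can be sketched as follows. This is a simulation, not AIS code: `bmd` is a plain dict, `head_bucket` stands in for the backend HEAD(bucket) call, and metasync is elided.

```python
# Illustrative default properties (hypothetical subset)
DEFAULT_PROPS = {"mirror.enabled": False, "checksum.type": "xxhash"}

def lookup_or_add(bmd: dict, bck: str, head_bucket) -> dict:
    """Sketch of lazy bucket discovery on first reference."""
    if bck in bmd:                           # step 2: already registered
        return bmd[bck]
    if not head_bucket(bck):                 # step 3: HEAD(bucket) validates access
        raise LookupError(f"remote bucket {bck!r} does not exist or is inaccessible")
    bmd[bck] = dict(DEFAULT_PROPS)           # step 4: insert with default properties
    # step 5: metasync(bmd) would replicate the updated BMD to all nodes
    return bmd[bck]

bmd = {}
lookup_or_add(bmd, "s3://images", head_bucket=lambda b: True)
assert "s3://images" in bmd
```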

Explicit creation

Invoked with:

```shell
ais create s3://bucket --props="extra.aws.profile=prod"
```

AIS:

  1. Parses URI and properties
  2. Issues HEAD (unless --skip-lookup or bucket already in BMD)
  3. Creates BMD entry with specified properties
  4. Metasyncs to all nodes

Use --skip-lookup when default credentials cannot access the bucket:

```shell
ais create s3://restricted --skip-lookup \
    --props="extra.aws.profile=special"
```

Deletion and eviction

AIS buckets:

```shell
ais bucket rm ais://bucket
```

Destroys the bucket and all objects permanently.

Cloud buckets:

```shell
ais bucket rm s3://bucket
# or equivalently:
ais evict s3://bucket
```

Removes AIS state (BMD entry, cached objects). Cloud data remains untouched.

Eviction options:

| Command | Effect |
| --- | --- |
| `ais evict s3://bucket` | Remove BMD entry and all cached objects |
| `ais evict s3://bucket --keep-md` | Keep BMD entry, remove cached objects |
| `ais evict s3://bucket --prefix images/` | Evict only matching objects |
| `ais evict s3://bucket --template "shard-{0..999}.tar"` | Evict by template |

Eviction is namespace-aware:

```shell
ais evict s3://#prod/data   # only this namespace
ais evict s3://#dev/data    # independent operation
```

See also: Three Ways to Evict Remote Bucket


Namespaces

Namespaces solve real-world scenarios that a single global namespace cannot handle:

  • Account-scoped buckets - SwiftStack, Cloudian bucket names are per-account
  • Multiple credentials - Different AWS profiles with overlapping bucket names
  • Environment separation - Same bucket name across dev/staging/prod
  • Multiple endpoints - Oracle OCI, SwiftStack, and AWS S3 in the same cluster

Syntax

```
<provider>://#<namespace>/<bucket>
```

Examples:

```shell
# Two buckets named "data", different AWS accounts
ais create s3://#prod/data --props="extra.aws.profile=prod-account"
ais create s3://#dev/data --props="extra.aws.profile=dev-account"

# SwiftStack with account-scoped buckets
ais create s3://#swift-tenant/bucket \
    --props="extra.aws.profile=swift extra.aws.endpoint=https://swift.example.com"

# S3-compatible with custom endpoint
ais create s3://#minio/images \
    --props="extra.aws.endpoint=http://minio.local:9000"
```

Note: The bucket namespace you choose - whether it represents an AWS profile, a GCS account, or simply a human-readable alias - becomes part of the bucket’s physical on-disk path.

In BMD

Metadata-wise, each bucket receives:

  • A unique BID (bucket ID)
  • Its own bucket props (Bprops)
  • Its own credential configuration

Remote AIS Clusters

AIS clusters can attach to each other, forming a global namespace of distributed datasets.

Attaching clusters

```shell
# Attach with alias
ais cluster remote-attach remais=http://remote-proxy:8080

# Verify attachment
ais show remote-cluster
```

Accessing remote buckets

```shell
# List all remote AIS buckets
ais ls ais://@

# List buckets in specific remote cluster
ais ls ais://@remais/

# Access objects
ais get ais://@remais/bucket/object local-file
```

Namespace encoding

The alias resolves to the remote cluster’s UUID, stored in the namespace:

```
ais://@remais/bucket => Bck{Provider: "ais", Ns: Ns{UUID: "Cjl2Ht4gE"}, Name: "bucket"}
```

See also: Remote AIS Cluster


Backend Buckets

Backend buckets represent indirection - an AIS bucket that proxies to a remote bucket. This is fundamentally different from namespaces.

| Aspect | Namespace | Backend Bucket |
| --- | --- | --- |
| Purpose | Disambiguate identity | Proxy/cache |
| Bucket count | 1 (the cloud bucket itself) | 2 (AIS + cloud) |
| On-disk path | `@aws/#ns/bucket/` | `@ais/cache-bucket/` |
| Use case | Multi-account, multi-endpoint | Caching, ETL, aliasing |

Note: See section Working with Same-Name Remote Buckets below for further guidelines and usage examples.

Creating backend relationships

```shell
ais create ais://cache
ais bucket props set ais://cache backend_bck=s3://origin
```

Now reads/writes to ais://cache transparently forward to s3://origin.

Use cases

Hot cache for cold storage:

```shell
ais create ais://hot-cache
ais bucket props set ais://hot-cache backend_bck=s3://cold-archive lru.enabled=true
```

Dataset aliasing:

```shell
# Always point to latest processed dataset
ais create ais://dataset-latest
ais bucket props set ais://dataset-latest backend_bck=gs://processed-2024-01-15

# Update when new version available
ais bucket props set ais://dataset-latest backend_bck=gs://processed-2024-01-20
```

Access control:

```shell
# Expose subset of cloud data under controlled name
ais create ais://public-subset
ais bucket props set ais://public-subset backend_bck=s3://internal-data
```

Disconnecting

```shell
ais bucket props set ais://cache backend_bck=none
```

Cached objects remain in the AIS bucket.

See also: Backend Bucket CLI examples

System Buckets

AIS distinguishes between user buckets and system buckets.

User buckets are created by users (or lazily on first remote access) and follow standard naming rules: letters, digits, dashes, underscores, and single dots are allowed, up to 64 bytes.

System buckets are AIS-internal infrastructure. They are created automatically when needed and are identified by a reserved dot-prefix: names starting with . are reserved for system use. Any attempt to create a user bucket with a .-prefixed name is rejected unless it matches a known system bucket.

The current naming convention is .sys-*. The first system buckets are:

| Bucket | Purpose | Introduced |
| --- | --- | --- |
| `ais://.sys-inventory` | Stores native bucket inventory (NBI) snapshots as chunked objects | v4.3 |
| `ais://.sys-shardidx` | Stores shard indexes for direct random access into TAR archives | v4.4 |
Naming rules

Bucket names are limited to 64 bytes and may contain only letters, digits, dashes (-), underscores (_), and single dots (.). Consecutive dots (..) are not allowed.

Names that start with . are reserved for system buckets. User-defined buckets therefore cannot use a leading dot, and any unrecognized .-prefixed name is rejected.

Visibility and lifecycle

System buckets are visible in regular ais ls output and can be listed and read with appropriate permissions. They are not intended for direct user writes - AIS creates and destroys them behind the scenes, and manages their content.

System buckets are created on demand (for example, ais://.sys-inventory is created on the fly upon the first inventory creation request) and follow the same cluster-wide replication and metasync lifecycle as user buckets.

Future plans

The .sys-* namespace is designed to accommodate additional AIS-internal services over time. Planned and potential uses include:

  • Checkpoints - durable storage for multi-hour and multi-day distributed training jobs
  • Logs - cluster-wide log aggregation and archival
  • Indexes - content indexes for direct access into large archives (TAR, ZIP)
  • Inventories - already implemented in v4.3 via ais://.sys-inventory

Part II: How-To

Table of Contents

  1. Working with Same-Name Remote Buckets
  2. Working with Remote AIS Clusters
  3. Prefetch and Evict
  4. Access Control
  5. Provider-Specific Configuration
  6. List Objects
  7. Operations Summary
  8. CLI Quick Reference
  9. Appendix A: On-Disk Layout
  10. References

Working with Same-Name Remote Buckets

A common scenario: you have buckets with identical names across different AWS accounts, S3-compatible endpoints, or cloud providers. AIS handles this in two ways.

Option A: Namespaces (direct access)

Create each bucket with its own namespace and credentials:

```shell
ais create s3://#prod/data --props="extra.aws.profile=prod-account"
ais create s3://#dev/data --props="extra.aws.profile=dev-account"
```

Now s3://#prod/data and s3://#dev/data are distinct buckets - separate BMD entries, separate on-disk paths, separate credentials. Access them directly:

```shell
ais ls s3://#prod/data
ais get s3://#dev/data/file.txt ./local
```

Option B: Backend buckets (via AIS proxy)

Create AIS buckets that front the remote buckets:

```shell
# First, create the namespaced S3 buckets with proper credentials
ais create s3://#prod/data --props="extra.aws.profile=prod-account"
ais create s3://#dev/data --props="extra.aws.profile=dev-account"

# Then create AIS buckets fronting them
ais create ais://prod-data --props="backend_bck=s3://#prod/data"
ais create ais://dev-data --props="backend_bck=s3://#dev/data"
```

Access through the AIS buckets:

```shell
ais ls ais://prod-data

# GET and discard locally, with a side effect of cold-GETting the object from remote storage
ais get ais://dev-data/images.jpg /dev/null
```

Which to use?

| Scenario | Recommended |
| --- | --- |
| Direct multi-account access, no caching logic | Namespaces (Option A) |
| Need LRU eviction, local caching policies | Backend buckets (Option B) |
| ETL pipelines, dataset transformation | Backend buckets (Option B) |
| Want to rename or alias cloud buckets | Backend buckets (Option B) |
| Simplest setup, fewest moving parts | Namespaces (Option A) |

Namespaces give you direct access with minimal overhead. Backend buckets add a layer of indirection but unlock full AIS bucket capabilities - LRU, mirroring, erasure coding, and transformation pipelines.

Note that Option B requires the namespaced S3 bucket to exist first. You can’t skip straight to backend_bck=s3://data with custom credentials - AIS needs to resolve the backend bucket, which requires proper credentials already in place. Create the namespaced cloud bucket first, then front it with an AIS bucket if needed.

See also: AWS Profiles and S3 Endpoints

GCP / Google Cloud Storage

| Property | Description |
| --- | --- |
| `extra.gcp.application_creds` | Absolute path to a GCP service-account JSON key file; overrides the global GOOGLE_APPLICATION_CREDENTIALS for this bucket |

```shell
# Inspect GCP-specific knobs (JSON shows all fields, "-" means unset)
$ ais bucket props show gs://my-bucket extra.gcp --json
{
    "extra": {
        "gcp": {
            "application_creds": "-"
        }
    }
}

# Assign per-bucket credentials
ais bucket props set gs://my-bucket \
    extra.gcp.application_creds=/etc/ais/sa-team-b.json

# Register a bucket the default service account cannot reach
ais create gs://restricted --skip-lookup \
    --props="extra.gcp.application_creds=/etc/ais/sa-restricted.json"

# Two GCS buckets, different GCP projects
ais create gs://#proj-a/data --skip-lookup \
    --props="extra.gcp.application_creds=/etc/ais/sa-proj-a.json"
ais create gs://#proj-b/data --skip-lookup \
    --props="extra.gcp.application_creds=/etc/ais/sa-proj-b.json"
```

See also: GCP Per-Bucket Credentials


Working with Remote AIS Clusters

AIS clusters can be attached to each other, forming a global namespace of all individually hosted datasets. For background and configuration details, see Remote AIS Cluster.

Attach remote cluster

```shell
# Attach a remote AIS cluster with alias `teamZ`
$ ais cluster attach teamZ=http://cluster.ais.org:51080
Remote cluster (teamZ=http://cluster.ais.org:51080) successfully attached

# Verify the attachment
$ ais show remote-cluster

UUID       URL                            Alias   Primary      Smap   Targets   Online
MCBgkFqp   http://cluster.ais.org:51080   teamZ   p[primary]   v317   10        yes
```

List buckets and objects in remote clusters

```shell
# List all buckets in all remote AIS clusters
# By convention, `@` prefixes remote cluster UUIDs
$ ais ls ais://@

AIS Buckets (4)
  ais://@MCBgkFqp/imagenet
  ais://@MCBgkFqp/coco
  ais://@MCBgkFqp/imagenet-augmented
  ais://@MCBgkFqp/imagenet-inflated

# List buckets in a specific remote cluster (by alias or UUID)
$ ais ls ais://@teamZ

AIS Buckets (4)
  ais://@MCBgkFqp/imagenet
  ais://@MCBgkFqp/coco
  ais://@MCBgkFqp/imagenet-augmented
  ais://@MCBgkFqp/imagenet-inflated

# List objects in a remote bucket
$ ais ls ais://@teamZ/imagenet-augmented
NAME              SIZE
train-001.tgz     153.52KiB
train-002.tgz     136.44KiB
...
```

Prefetch and Evict

Prefetching

Proactively fetch objects from remote storage into AIS cache:

```shell
# Prefetch by list
ais prefetch s3://bucket --list "obj1,obj2,obj3"

# Prefetch by prefix
ais prefetch s3://bucket --prefix "images/"

# Prefetch by template
ais prefetch s3://bucket --template "shard-{0000..0999}.tar"

# With parallelism control
ais prefetch s3://bucket --prefix data/ --num-workers 16
```
Monitoring prefetch

```shell
# Check progress
ais show job prefetch

# With auto-refresh
ais show job prefetch --refresh 5

# Wait for completion
ais wait prefetch JOB_ID
```

Evicting

Remove cached objects (cloud data untouched):

```shell
# Evict entire bucket
ais evict s3://bucket

# Keep metadata, remove objects
ais evict s3://bucket --keep-md

# Evict by prefix
ais evict s3://bucket --prefix old-data/

# Evict by template
ais evict s3://bucket --template "temp-{0..999}.dat"
```

Note: The terms “cached” and “in-cluster” are used interchangeably. A “cached” object is one that exists in AIS storage regardless of its origin.


Access Control

Bucket access is controlled by a 64-bit access property. Bits map to operations:

Note: When enabled, access permissions are enforced by AIS and apply to both local and backend operations; misconfiguration can block cold GETs or deletes. See version 4.1 release notes for additional pointers on the topics of authentication and security.

| Operation | Bit | Hex |
| --- | --- | --- |
| GET | 0 | 0x1 |
| HEAD | 1 | 0x2 |
| PUT, APPEND | 2 | 0x4 |
| Cold GET | 3 | 0x8 |
| DELETE | 4 | 0x10 |
Setting access

```shell
# Make bucket read-only
ais bucket props set ais://data access=ro

# Disable DELETE
ais bucket props set ais://data access=0xFFFFFFFFFFFFFFEF
```

Predefined values

| Name | Meaning |
| --- | --- |
| `ro` | Read-only (GET + HEAD) |
| `rw` | Full access (default) |

See also: Authentication and Access Control


Provider-Specific Configuration

AWS / S3-compatible

| Property | Description |
| --- | --- |
| `extra.aws.profile` | Named AWS profile from ~/.aws/credentials |
| `extra.aws.endpoint` | Custom S3 endpoint URL |
| `extra.aws.cloud_region` | Region override |

```shell
# Use named profile
ais create s3://bucket --props="extra.aws.profile=production"

# S3-compatible endpoint (SwiftStack, Oracle OCI, AWS S3, etc.)
ais create s3://#minio/bucket \
    --props="extra.aws.endpoint=http://minio:9000 extra.aws.profile=minio-creds"
```

See also: AWS Profiles and S3 Endpoints

Oracle Cloud Infrastructure

| Property | Description |
| --- | --- |
| `extra.oci.region` | OCI region override for this bucket |

```shell
# Two same-name OCI buckets in different regions, separated by namespace
ais create oc://#iad/data --props="extra.oci.region=us-ashburn-1"
ais create oc://#phx/data --props="extra.oci.region=us-phoenix-1"
```

Google Cloud

| Property | Description |
| --- | --- |
| `extra.gcp.project_id` | GCP project ID |

Azure

| Property | Description |
| --- | --- |
| `extra.azure.account_name` | Storage account name |
| `extra.azure.account_key` | Storage account key |

List Objects

The ListObjects and ListObjectsPage APIs (Go and Python) return object names and properties. For large buckets (many millions of objects), we strongly recommend the paginated version of the API.

AIS CLI supports both - a quick glance at ais ls --help will provide an idea of all (numerous) supported options.

Basic usage

As always with AIS CLI, a quick look at the command’s help (ais ls --help in this case) may save time.

```shell
# When the remote bucket does not exist in AIS _and_ is accessible with default profile/endpoint
$ ais ls s3://bucket
Error: ErrRemoteBckNotFound: aws bucket "s3://ais-vm" does not exist
Tip: use '--all' to list all objects including remote

# Note: `--all` needs to be used only once, and only while the remote bucket is not yet in AIS
$ ais ls s3://bucket --all --limit 4
```

More basic examples follow below:

```shell
# List all objects
ais ls s3://bucket

# List with prefix
ais ls s3://bucket --prefix images/

# Same as above
ais ls s3://bucket/images/

# List cached only
ais ls s3://bucket --cached

# Summary (counts and sizes)
ais ls s3://bucket --summary
```

Properties

Request specific properties with --props:

| Property | Description |
| --- | --- |
| `name` | Object name (always included) |
| `size` | Object size |
| `version` | Object version |
| `checksum` | Object checksum |
| `atime` | Last access time |
| `location` | Target and mountpath |
| `copies` | Number of copies |
| `ec` | Erasure coding info |
| `status` | Object status |

```shell
ais ls s3://bucket --props "name,size,atime,copies"
```

Flags

| Flag | Description |
| --- | --- |
| `--cached` | Only objects present in AIS |
| `--all` | Include all objects (remote and present) |
| `--regex` | Filter by regex pattern |
| `--summary` | Show aggregate statistics |
| `--limit N` | Return at most N objects |

Pagination

For large buckets, results are paginated:

```shell
# API returns continuation_token for next page
# CLI handles pagination automatically
ais ls s3://large-bucket --limit 10000
```
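Conceptually, the paginated API is a continuation-token loop. Below is a generic sketch; `list_page` is a hypothetical stand-in for a ListObjectsPage-style call, not the actual Go or Python SDK signature:

```python
def list_all(list_page, bucket: str, page_size: int = 1000):
    """Drain a paginated listing: call list_page until the backend
    returns an empty continuation token."""
    token = ""
    while True:
        entries, token = list_page(bucket, token, page_size)
        yield from entries
        if not token:
            break

# In-memory stand-in backend with 2500 objects, for demonstration only
def fake_page(bucket, token, n):
    names = [f"obj-{i:04d}" for i in range(2500)]
    start = int(token or 0)
    nxt = str(start + n) if start + n < len(names) else ""
    return names[start:start + n], nxt

assert len(list(list_all(fake_page, "s3://large-bucket"))) == 2500
```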

See also: CLI: List Objects


Operations Summary

| Command | Behavior |
| --- | --- |
| `ais ls <bucket>` | List objects; implicit create for remote buckets |
| `ais create <bucket>` | Explicit creation with optional properties |
| `ais bucket rm <ais-bucket>` | Destroy AIS bucket and all objects |
| `ais bucket rm <cloud-bucket>` | Remove from BMD, evict cached objects |
| `ais evict <bucket>` | Same as rm for cloud buckets |
| `ais prefetch <bucket>` | Proactively cache remote objects |
| `ais bucket props set` | Update properties, metasync cluster-wide |
| `ais bucket props reset` | Restore cluster defaults |
| `ais bucket props show` | Display current properties |

All operations respect namespaces. ais ls s3://#ns1/bucket and ais ls s3://#ns2/bucket operate on different buckets.


CLI Quick Reference

```shell
# ─────────────────────────────────────────────────────────────
# Implicit creation - bucket added on first access
# ─────────────────────────────────────────────────────────────
ais ls s3://my-bucket
ais get s3://my-bucket/file.txt ./local-file

# ─────────────────────────────────────────────────────────────
# Explicit creation - custom credentials
# ─────────────────────────────────────────────────────────────
ais create s3://my-bucket --props="extra.aws.profile=prod"

# ─────────────────────────────────────────────────────────────
# Namespaced buckets - same name, different accounts
# ─────────────────────────────────────────────────────────────
ais create s3://#acct1/data --props="extra.aws.profile=acct1"
ais create s3://#acct2/data --props="extra.aws.profile=acct2"

# ─────────────────────────────────────────────────────────────
# Skip lookup - bucket exists but default creds can't reach it
# ─────────────────────────────────────────────────────────────
ais create s3://restricted --skip-lookup \
    --props="extra.aws.profile=special"

# ─────────────────────────────────────────────────────────────
# Remote AIS cluster
# ─────────────────────────────────────────────────────────────
ais cluster remote-attach remais=http://remote:8080
ais ls ais://@remais/
ais create ais://@remais/new-bucket

# ─────────────────────────────────────────────────────────────
# Backend buckets - AIS bucket fronting cloud
# ─────────────────────────────────────────────────────────────
ais create ais://cache
ais bucket props set ais://cache backend_bck=s3://origin

# ─────────────────────────────────────────────────────────────
# Prefetch and evict
# ─────────────────────────────────────────────────────────────
ais prefetch s3://bucket --prefix data/
ais evict s3://bucket --keep-md

# ─────────────────────────────────────────────────────────────
# Properties
# ─────────────────────────────────────────────────────────────
ais bucket props show s3://bucket
ais bucket props set s3://bucket mirror.enabled=true mirror.copies=2
ais bucket props reset s3://bucket
```

Appendix A: On-Disk Layout

Note: This section is provided for advanced troubleshooting and debugging only.

The bucket identity you specify in CLI or API - provider, namespace, bucket name - materializes as directory structure on every mountpath. This isn’t just metadata; it’s physical layout.

Say we have an S3 bucket s3://dataset with an object images/cat.jpg in it. Given two different bucket namespaces, the respective FQNs inside AIStore may look like:

```
/ais/mp1/@aws/#prod/dataset/%ob/images/cat.jpg
/ais/mp4/@aws/#dev/dataset/%ob/images/cat.jpg
```

where:

| Component | Example 1 | Example 2 | Meaning |
| --- | --- | --- | --- |
| Mountpath | `/ais/mp1` | `/ais/mp4` | Physical disk or partition |
| Provider | `@aws` | `@aws` | Backend provider |
| Namespace | `#prod` | `#dev` | Account, profile, or user-defined alias |
| Bucket | `dataset` | `dataset` | Bucket name |
| Content type | `%ob` | `%ob` | Content kind: objects, EC slices, chunks, metadata |
| Object | `images/cat.jpg` | `images/cat.jpg` | Object name (preserves virtual directory structure) |

Note: disk partitioning is not recommended; it may degrade performance.

The namespace you choose - whether it maps to an AWS profile, a SwiftStack account, or just a human-readable tag like #prod - becomes a physical directory on every target node. This guarantees:

  • Isolation: s3://#acct1/data and s3://#acct2/data never share storage paths
  • No collision: Same-name buckets with different namespaces coexist without conflict

What starts as a logical identifier in ais create s3://#prod/bucket ends up as /mpath/@aws/#prod/bucket/ on disk.
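The mapping from logical identity to FQN can be sketched as follows (illustrative; the actual AIS layout includes additional detail covered in the On-Disk Layout document):

```python
def object_fqn(mountpath: str, provider: str, namespace: str,
               bucket: str, obj: str, content: str = "%ob") -> str:
    """Build the documented FQN: <mpath>/@<provider>/#<ns>/<bucket>/%ob/<object>.
    An empty namespace denotes the global namespace and is omitted from the path."""
    parts = [mountpath, f"@{provider}"]
    if namespace:
        parts.append(f"#{namespace}")
    parts += [bucket, content, obj]
    return "/".join(parts)

assert object_fqn("/ais/mp1", "aws", "prod", "dataset", "images/cat.jpg") == \
    "/ais/mp1/@aws/#prod/dataset/%ob/images/cat.jpg"
```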

For details, see On-Disk Layout document.


References