How To Upload To Amazon S3 From GitHub Actions



GitHub Actions is a powerful CI/CD tool for running automated builds from your GitHub repository. The last step of this process is deployment, which involves uploading the build artifacts somewhere they can be accessed easily. AWS S3 is the storage of choice for many people, and is easy to integrate with Actions scripts.

Why Use AWS S3 Buckets For Deployment?

It’s worth noting that GitHub Actions does have a basic artifact storage system; however, you won’t want to use it in production. It’s intended for archival and testing purposes, and artifacts from completed builds expire after 90 days by default.

There’s also GitHub Packages, which is designed as a replacement for language-specific package registries like JavaScript’s npm. This can be very useful if you’re publishing an npm package, but it isn’t as useful for other kinds of builds.

For every other type of artifact, Amazon’s S3 buckets remain one of the go-to ways to upload files for distribution and deployment. They’re backed by AWS’s IAM permissions system, which has excellent security and lets you fine-tune access control to your compiled code.

Alternatively, if you’re using Docker containers for deployment, you will want to use a container registry instead. Luckily, GitHub has a private registry that is easy to integrate with, and you can read our guide to using it to learn more.

RELATED: How to Get Started with Github’s New Docker Container Registry

Using S3 is fairly easy, and most of the steps shown here will also apply to S3-compatible storage solutions, like DigitalOcean Spaces or self-hosted MinIO, since they use the same API.
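
The S3 Sync action used later in this guide, for instance, can point at a non-AWS service via its AWS_S3_ENDPOINT environment variable. The sketch below assumes a DigitalOcean Spaces endpoint, so swap in whatever endpoint your provider documents:

      env:
        # everything else stays the same as the AWS example later on
        AWS_S3_ENDPOINT: 'https://nyc3.digitaloceanspaces.com'  # assumed Spaces region endpoint
        AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}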

Uploading To AWS S3 From GitHub Actions

To get started, you’ll want to make sure the rest of your GitHub Actions build script is working and producing a valid build, as you generally don’t want to be debugging multiple issues at once.

If you don’t have one yet, the setup will depend on your build toolchain, but you can read our guide to setting up automated builds to learn more. You can also test the artifact that will be uploaded by using GitHub’s built-in upload-artifact action, which publishes the contents of a directory as a bundle.
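
As a rough sketch, an upload step might look like the following, with the path standing in for wherever your build output actually ends up:

    - name: Upload test artifact
      uses: actions/upload-artifact@v3
      with:
        name: build-output        # bundle name shown in the run summary
        path: bin/Linux/net48     # assumed build output directory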

You can then confirm the bundle was created under Summary > Artifacts on the workflow run page.

Once you have a build that isn’t failing, you can add the S3 upload to the end of it. There’s no official action that handles this, but plenty of solutions exist on the GitHub Actions Marketplace.

The most popular one is S3 Sync, which uses the native S3 API to upload built artifacts and is easy to configure. There are also simple wrappers, like s3cmd, which let you pass commands directly to the S3 CLI.

One note, though: most of these rely on Linux-hosted runners or Docker containers, which come with the dependencies needed for the S3 CLI to function. Linux is what the majority of builds run on, but if you need Windows to run your builds, you’ll need a different action. The cross-platform one that we’ve found to work is stcalica/s3-upload. This uses a JavaScript wrapper that installs the s3cmd package, and it works just fine on Windows.

One thing you’ll need to do first is set up GitHub Secrets for your AWS authentication tokens. These can’t be public, of course, so they need to be stored as repository secrets and accessed by name. This prevents accidental leakage of your tokens and allows for easy key management.

You can read our guide on using GitHub Secrets to learn more about them, but the short version is to head to the repository settings, click Secrets > Actions, and create a new secret for each value the workflow needs: in this case, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_S3_BUCKET.

Then, at the end of your GitHub Actions script, add a step named “Deploy To S3” and set it to use the s3-sync action, or whichever one you’ve chosen. You’ll need to pass the secrets you configured for the access key ID, secret access key, and bucket name as environment variables.

    ...
    - uses: actions/checkout@master
    - name: Deploy To S3
      uses: jakejarvis/s3-sync-action@master
      with:
        args: --acl public-read --follow-symlinks --delete
      env:
        AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_REGION: 'us-east-1'           # optional: defaults to us-east-1
        SOURCE_DIR: 'bin/Linux/net48'     # optional: defaults to entire repository
        DEST_DIR: 'builds'                # optional: path in the bucket, defaults to the root

With this workflow in particular, the args are passed straight through to the underlying aws s3 sync command, which can be used to turn on public read ACLs, for example. Here, the uploaded objects are set to publicly readable, and the --delete flag wipes any old contents at that path in the bucket, ensuring everything matches the build output with no stale files.
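
As a quick sketch, assuming the same S3 Sync action, you could instead keep objects private and exclude files you don’t want published by adjusting the args, which are standard aws s3 sync flags:

      with:
        # keep objects private, skip source maps, and delete anything in the
        # bucket that isn't present in the build output
        args: --acl private --exclude '*.map' --delete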

With that added, all that’s left is to commit the build script changes, and optionally re-run the build manually if it doesn’t automatically start on the new commit. You won’t see any build output attached to the run in GitHub, because it’s been sent to S3 instead, but you can check the sync command’s logs under the “Deploy To S3” step in the build log.

Hopefully, you’ll see log output listing each uploaded file and confirming the process ran successfully.
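
As a final tip, if your workflow only triggers on pushes, you can add a workflow_dispatch trigger so deployments can also be started by hand from the Actions tab. A minimal sketch, assuming your default branch is main:

    on:
      push:
        branches: [main]      # assumed default branch
      workflow_dispatch:      # adds a manual "Run workflow" button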




