Use the steps here to deploy your app to Leapcell (for serverless hosting with included SSL), Neon (for a serverless database), and AWS S3 (for static file and media storage).

Static Files & S3

First, we'll add our static files to AWS S3.

Update App Settings

Start by replacing this:

# config/settings.py

STATIC_URL = "/static/"
STATICFILES_DIRS = [os.path.join(BASE_DIR, "static")]
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")

with this:

# config/settings.py

STATICFILES_DIRS = [os.path.join(BASE_DIR, "static")]
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")

AWS_ACCESS_KEY_ID = env("S3_ACCESS_KEY_ID")
AWS_SECRET_ACCESS_KEY = env("S3_SECRET_ACCESS_KEY")
AWS_STORAGE_BUCKET_NAME = env("S3_STORAGE_BUCKET_NAME")
AWS_S3_REGION_NAME = env("S3_REGION_NAME")  # e.g., us-east-1
AWS_S3_CUSTOM_DOMAIN = f'{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com'

# For serving static files directly from S3
AWS_S3_USE_SSL = True
AWS_S3_VERIFY = True
STATIC_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/staticfiles/'
STORAGES = {
    "default": {
        "BACKEND": "storages.backends.s3boto3.S3Boto3Storage",
        "OPTIONS": {
            "bucket_name": env("S3_STORAGE_BUCKET_NAME"),
            "region_name": env("S3_REGION_NAME"),
        },
    },
    "staticfiles": {
        "BACKEND": "storages.backends.s3boto3.S3Boto3Storage",
        "OPTIONS": {
            "bucket_name": env("S3_STORAGE_BUCKET_NAME"),
            "region_name": env("S3_REGION_NAME"),
            "location": "staticfiles",
        },
    },
}
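
The snippet above assumes env() and os are already available in settings.py. As a minimal sketch, assuming django-environ is used to read the .env file (adjust if you load environment variables a different way), the top of the file might look like this:

```python
# config/settings.py (top of file) - sketch assuming django-environ
import os
from pathlib import Path

import environ

BASE_DIR = Path(__file__).resolve().parent.parent

# Reads variables from a .env file at the project root, if one exists
env = environ.Env()
environ.Env.read_env(os.path.join(BASE_DIR, ".env"))
```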

Create staticfiles Directory

Run the following command to collect all static files into a dedicated staticfiles directory:

(.venv) % python3 manage.py collectstatic --noinput

Create S3 Bucket

Next, we need to create an AWS S3 bucket in the AWS Console. Click "Create bucket" and configure the following:

Bucket name: Choose any name that is descriptive of your app.

Deselect "Block all public access" (then check the "I acknowledge..." confirmation that pops up).

Leave all other settings as the default, then click "Create bucket".

Click on the bucket you just created, open the "Permissions" tab, and copy/paste the following JSON into the corresponding sections, replacing <your-bucket-name> with the name of your bucket (be sure to hit Save for each section):

// Bucket policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<your-bucket-name>/*"
        }
    ]
}

// Cross-origin resource sharing (CORS)

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "POST",
            "GET",
            "PUT"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]

Upload the staticfiles directory we generated to your S3 bucket via the Objects tab.
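
Alternatively, if you have the AWS CLI installed and configured, the console upload can be done as a sync from the terminal (substitute your own bucket name for the placeholder):

```shell
# Mirror the local staticfiles/ directory into the bucket's staticfiles/ prefix
aws s3 sync staticfiles/ s3://<your-bucket-name>/staticfiles/
```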

Set Up IAM User

Navigate to IAM in the AWS Console and create a dedicated IAM user for this app. Grant it the AmazonS3FullAccess permission policy. Head to the "Security credentials" tab for the user and create a new access key. Once created, you'll be shown a secret access key (which you can only view once, so be sure to copy it now) as well as an access key ID (which you can view at any time).
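
Note that AmazonS3FullAccess grants access to every bucket in the account. If you'd rather scope the user down to just this bucket, a tighter inline policy might look like the following sketch (substitute your bucket name; this is an illustration, not a required part of the setup):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::<your-bucket-name>"
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::<your-bucket-name>/*"
        }
    ]
}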

Environment Variables

Update your .env file with the following:

S3_ACCESS_KEY_ID=<the access key id you just received>
S3_SECRET_ACCESS_KEY=<the secret access key you just received>
S3_STORAGE_BUCKET_NAME=<the name of your bucket>
S3_REGION_NAME=<your region - e.g., us-east-2>

If these look familiar, you just added these environment variables to your settings file in the Update App Settings section above!

Install Dependencies

Run the following in the terminal:

% pip install django-storages
% pip install boto3
% pip freeze > requirements.txt

PostgreSQL & Neon

Up until now, we've been using a SQLite3 database. In production, we'll use PostgreSQL hosted in Neon.

Database Prep

Start by updating database info in the app settings to the following:

# settings.py
import dj_database_url

DATABASE_URL = env("DATABASE_URL", default="sqlite:///db.sqlite3")
DATABASES = {"default": dj_database_url.parse(DATABASE_URL, conn_max_age=600)}

In your .env file, set:

DATABASE_URL=sqlite:///db.sqlite3
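
dj_database_url packs every connection parameter into a single URL. As a sketch of the format you'll get from Neon later (hypothetical credentials and host, parsed here with only the standard library):

```python
from urllib.parse import urlsplit

# Hypothetical Neon-style connection string
url = "postgresql://app_user:s3cret@ep-example-123456.us-east-2.aws.neon.tech/appdb?sslmode=require"

parts = urlsplit(url)
print(parts.scheme)            # postgresql
print(parts.username)          # app_user
print(parts.hostname)          # ep-example-123456.us-east-2.aws.neon.tech
print(parts.path.lstrip("/"))  # appdb
```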

Install Database Dependencies

While it's not strictly required, since we won't be running PostgreSQL locally, let's make sure it's installed on our machine anyway. We'll also add a few more project dependencies. Run the following in the terminal:

(.venv) % brew install postgresql
(.venv) % python3 -m pip install "psycopg[binary]"
(.venv) % python3 -m pip install gunicorn

Create the Database

Create a new project in Neon and give the project and database a name. I chose AWS as the cloud service provider and US East 2 as the region.

Click on "Connect your database" and copy the connection string. You'll be using this in several places, so don't lose track of it!

GitHub Actions Prep

Head to GitHub repository settings > Secrets and variables > Actions and add the following secrets:

DATABASE_URL: <the url / connection string you just copied>
SECRET_KEY: <from your .env file>
SUPERUSER_EMAIL: <the email address you want to use for your superuser>
SUPERUSER_NAME: <the name you want to use for your superuser>
SUPERUSER_PASSWORD: <the password you want to use for your superuser>

Migrate Database Action

Click on Actions and add the following to a new workflow file under .github/workflows/ (you can name it something like migrate.yaml). This file should remain in your repository, as it will run each time you push to the main branch, applying all of your migrations to the database:

name: Django Migrations

on:
  push:
    branches:
      - main

jobs:
  migrate:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Set Up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.13.1"

      - name: Install Dependencies
        run: |
          pip install -r requirements.txt

      - name: Run Migrations
        env:
          SECRET_KEY: ${{ secrets.SECRET_KEY }}
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
        run: |
          python manage.py migrate --noinput
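
Before relying on the workflow, you can sanity-check the same migration from your own machine (substitute your Neon connection string; the --plan flag prints what would run without applying anything):

```shell
(.venv) % DATABASE_URL="<your Neon connection string>" python3 manage.py migrate --plan
```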

Create Superuser Action

Since Leapcell doesn't have a built-in console or CLI integration, we'll need to use a bit of brute force to create a superuser in our production database.

Similarly to above, create a new GitHub actions workflow. But this time, only run it once and then delete it from the repository (assuming it runs successfully).

name: Create Django Superuser
on: push
jobs:
  create_superuser:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.13.1"

      - name: Install dependencies
        run: |
          pip install -r requirements.txt

      - name: Create Superuser
        env:
          SECRET_KEY: ${{ secrets.SECRET_KEY }}
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
          DJANGO_SUPERUSER_NAME: ${{ secrets.SUPERUSER_NAME }}
          DJANGO_SUPERUSER_EMAIL: ${{ secrets.SUPERUSER_EMAIL }}
          DJANGO_SUPERUSER_PASSWORD: ${{ secrets.SUPERUSER_PASSWORD }}
        run: |
          python manage.py shell <<EOF
          from django.contrib.auth import get_user_model
          import os

          User = get_user_model()
          username = os.getenv('DJANGO_SUPERUSER_NAME')
          email = os.getenv('DJANGO_SUPERUSER_EMAIL')
          password = os.getenv('DJANGO_SUPERUSER_PASSWORD')

          print(f"DEBUG: Attempting to create superuser {username} with email {email}")

          if not User.objects.filter(username=username).exists():
              User.objects.create_superuser(username, email, password)
              print("✅ Superuser created successfully!")
          else:
              print("⚠️ Superuser already exists.")
          EOF
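
As a lighter-weight alternative, Django's built-in createsuperuser command supports non-interactive use when DJANGO_SUPERUSER_* variables are set, though note it expects DJANGO_SUPERUSER_USERNAME rather than the DJANGO_SUPERUSER_NAME used above:

```shell
# Reads DJANGO_SUPERUSER_USERNAME, DJANGO_SUPERUSER_EMAIL, and
# DJANGO_SUPERUSER_PASSWORD from the environment
python manage.py createsuperuser --noinput
```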

That's it! Your database should be ready to go, and your first superuser should already exist in the auth_user table.

Start Hosting on Leapcell

We're almost finished! Just a couple more things to do before this app is live!

Update App Settings

Change these items in your app's settings:

ALLOWED_HOSTS = ["<your-app-url>", "localhost"]
CSRF_TRUSTED_ORIGINS = ["https://*.<your-domain>"]

Update requirements.txt

Since we've added some dependencies to the project, let's run:

(.venv) % pip freeze > requirements.txt

Then make a final commit before deploying our project.

Create Leapcell Service

Log in to Leapcell, create a new service, and link it to your project's GitHub repository.

Give the service a name and set (at least) the following environment variables:

DATABASE_URL: <the url / connection string from Neon>
SECRET_KEY: <from your .env file>
DEBUG: True
S3_ACCESS_KEY_ID: <the access key id from S3>
S3_SECRET_ACCESS_KEY: <the secret access key from S3>
S3_STORAGE_BUCKET_NAME: <the name of your S3 bucket>
S3_REGION_NAME: <your region - e.g., us-east-2>

(Once you've confirmed the deployment works, consider setting DEBUG to False.)

Connect Domain Name

If you are using a custom domain name for your app, follow the instructions in Leapcell for adding an A record to your DNS, then add any routing rules you'd like (though you've probably handled routing already in your urls.py files). At a minimum, select a service destination for the root path, then click "Create Domain".

That's it! You should now be able to visit your live website!