April 14, 2026

Deploying Astro to Firebase Hosting with SEO Done Right

astro · firebase · seo · web · tutorial

I’ve migrated 4 websites to Astro + Firebase Hosting in the last couple of weeks. apialerts.com, hooks.apialerts.com, hapi.apialerts.com, and this site. I’ve got one more to do after this (shadowcat.app).

I’m not a web developer. I’m a native Android/iOS dev. But after doing this 4 times I’ve got a setup that works well and I want to document it before I forget.

This post covers the Astro config, Firebase config, SEO setup, and GitHub Actions deployment. All the stuff I wish someone had written down in one place.

Why Firebase Hosting?

Most Astro tutorials use Vercel, Netlify, or Cloudflare Pages. They’re all great and honestly easier to set up than Firebase. So why Firebase?

I’m deep in the Google ecosystem. My backends run on Cloud Run, my databases are on Cloud SQL, my auth is Firebase Auth, and my analytics are Firebase/GA. All my infrastructure is already in Google Cloud. Adding hosting to the same project keeps everything in one place with one billing account and one set of IAM permissions.

I use Cloudflare for DNS and someone might ask why I don’t just use Cloudflare Pages. Fair question. Cloudflare Pages would remove the need for a GitHub Actions workflow entirely. But then my hosting is in Cloudflare while everything else is in Google Cloud. I’d rather have a slightly more manual deploy setup than split my infrastructure across providers.

If you don’t have an existing Google Cloud setup, Vercel or Cloudflare Pages are probably easier choices. But if you’re already in the Google ecosystem, Firebase Hosting fits right in.

Astro config

Here’s the core of my astro.config.mjs across all my sites:

import { defineConfig } from 'astro/config'

const SITE_URL = 'https://mononz.com' // changes per site

export default defineConfig({
  site: SITE_URL,
  output: 'static',
  trailingSlash: 'never',
  build: {
    format: 'file',
  },
})

The important bits:

  • output: 'static' - We’re generating plain HTML files. No server runtime needed. Firebase Hosting serves static files and that’s all we need.
  • trailingSlash: 'never' - This means /blog not /blog/. Pick one and be consistent. I went with no trailing slashes across all my sites.
  • build.format: 'file' - This generates /blog.html instead of /blog/index.html. Combined with Firebase’s cleanUrls setting (more on that below), you get clean URLs without trailing slashes.

Getting trailing slashes wrong means duplicate content in Google’s eyes. /blog and /blog/ are two different URLs. Pick one, enforce it everywhere.

Firebase config

Here’s my firebase.json:

{
  "hosting": {
    "public": "dist",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ],
    "cleanUrls": true,
    "trailingSlash": false,
    "headers": [
      {
        "source": "**/*.@(js|css|svg|png|jpg|webp|woff2)",
        "headers": [
          {
            "key": "Cache-Control",
            "value": "public, max-age=31536000, immutable"
          }
        ]
      }
    ]
  }
}

  • cleanUrls: true - Serves blog.html at /blog without the extension. This is what makes the build.format: 'file' work with clean URLs.
  • trailingSlash: false - Firebase will redirect /blog/ to /blog. This matches our Astro config. If someone links to your page with a trailing slash, they still get to the right place and Google sees a redirect, not duplicate content.
  • Cache headers - Static assets get cached for a year. Astro hashes filenames on build so cache busting happens automatically.

Sitemap with git dates

Astro has a sitemap integration but by default it doesn’t include lastmod dates. Google uses these to know when content was last updated. I pull them from git history:

import { execSync } from 'node:child_process'
import { readdirSync } from 'node:fs'
import { join, relative } from 'node:path'

function getGitLastModified() {
  const map = new Map()
  const dirs = [
    join(process.cwd(), 'src/pages'),
    join(process.cwd(), 'src/content'),
  ]

  function walk(dir) {
    for (const entry of readdirSync(dir, { withFileTypes: true })) {
      const full = join(dir, entry.name)
      if (entry.isDirectory()) { walk(full); continue }
      if (!/\.(astro|md|mdx)$/.test(entry.name)) continue
      try {
        const date = execSync(
          `git log -1 --format=%cI -- "${full}"`,
          { encoding: 'utf-8' }
        ).trim()
        map.set(full, date || new Date().toISOString())
      } catch {
        map.set(full, new Date().toISOString())
      }
    }
  }

  dirs.forEach((d) => {
    try { walk(d) } catch { /* skip */ }
  })
  return map
}

Then pass it to the sitemap integration:

// urlDateMap is a Map of page URL -> ISO date, built by mapping the
// file paths from getGitLastModified() to their final URLs
sitemap({
  serialize: (item) => {
    const url = item.url.replace(/\/$/, '') || item.url
    const lastmod = urlDateMap.get(url)
    return { ...item, url, ...(lastmod ? { lastmod } : {}) }
  },
}),

The fallback to new Date().toISOString() is important. If git has no history for a file (e.g. shallow clone, new branch), the sitemap still gets a date instead of nothing. The checkout step in your GitHub Actions workflow also needs fetch-depth: 0 for full git history. Without it, git log only sees the latest commit.

One thing that tripped me up: I name my blog post files with a numbered prefix to keep them ordered in the filesystem (001-my-post.md, 002-another-post.md). The slug in the URL strips that prefix, so /blog/my-post not /blog/001-my-post. When mapping file paths to sitemap URLs, you need to strip that prefix too or the dates won’t match up:

// When building the URL from a content file path
let rel = relative(contentDir, filePath)
  .replace(/\.(md|mdx)$/, '')
  .replace(/(^|\/)\d+-/, '$1')  // strip numbered prefix, with or without a leading slash
If you don’t number your files, you don’t need that last replace. But if you do, you’ll spend an hour wondering why your blog post dates are missing from the sitemap. Ask me how I know.
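Roughly, the glue between the two steps looks like this. It's a sketch: the buildUrlDateMap name and the hard-coded /blog/ prefix are illustrative, not verbatim from my config, so adjust them to your routes:

```javascript
import { relative } from 'node:path'

// Illustrative glue: convert the file-path -> date map from
// getGitLastModified() into the URL -> date map the sitemap
// serialize callback looks up. The '/blog/' prefix and the
// helper name are assumptions; adjust to your routing.
function buildUrlDateMap(fileDateMap, contentDir, siteUrl) {
  const urlDateMap = new Map()
  for (const [filePath, date] of fileDateMap) {
    const rel = relative(contentDir, filePath)
      .replace(/\\/g, '/')              // normalize Windows separators
      .replace(/\.(astro|md|mdx)$/, '') // drop the extension
      .replace(/(^|\/)\d+-/, '$1')      // strip numbered prefix
    urlDateMap.set(`${siteUrl}/blog/${rel}`, date)
  }
  return urlDateMap
}
```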

Canonical URLs

Every page should have a canonical URL. This tells Google which version of a page is the “real” one. In my base layout:

<link rel="canonical" href={canonical} />

This matters most when you cross-post content. If I write a blog post here and cross-post it to dev.to, I set the canonical on dev.to to point back to my site. Google credits the original source.

For blog posts, I support an optional canonicalUrl in the frontmatter. If set, it overrides the default. This is useful when a post originated somewhere else and I’m mirroring it here:

---
title: "My Post"
canonicalUrl: "https://dev.to/jaredhall/my-post-123"
---
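The canonical value itself is a small computation. Here's a sketch (canonicalFor is a made-up name; in the layout it would be fed Astro.site, Astro.url.pathname, and the optional frontmatter override):

```javascript
// Sketch of computing a canonical URL. `site` is the site origin
// (Astro.site), `pathname` the current path (Astro.url.pathname),
// and `override` the optional frontmatter canonicalUrl.
function canonicalFor(site, pathname, override) {
  if (override) return override
  const url = new URL(pathname, site).href
  // Match trailingSlash: 'never', but leave the bare origin alone.
  return pathname === '/' ? url : url.replace(/\/$/, '')
}
```

In the layout that becomes something like `<link rel="canonical" href={canonicalFor(Astro.site, Astro.url.pathname, frontmatter.canonicalUrl)} />`.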

robots.txt for staging vs production

I use a dynamic robots.txt that blocks crawlers on staging:

import type { APIRoute } from 'astro'

export const GET: APIRoute = () => {
  const noindex = import.meta.env.PUBLIC_NOINDEX === 'true'
  const siteUrl = import.meta.env.SITE_URL ?? 'https://mononz.com'

  const content = noindex
    ? `User-agent: *\nDisallow: /\n`
    : `User-agent: *\nAllow: /\n\nSitemap: ${siteUrl}/sitemap-index.xml\n`

  return new Response(content, {
    headers: { 'Content-Type': 'text/plain' },
  })
}

Set PUBLIC_NOINDEX=true in your staging .env and Google won’t index your staging site. I’ve seen staging sites show up in search results and it’s not a good look.
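For reference, the staging env file is just a couple of lines (the staging URL here is illustrative):

```
# .env in the staging project
PUBLIC_NOINDEX=true
SITE_URL=https://staging.mononz.com
```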

RSS feed

If you have a blog, add an RSS feed. It’s a few lines of code, and plenty of tools and aggregators still consume RSS:

import rss from '@astrojs/rss'
import { getCollection } from 'astro:content'

export async function GET(context) {
  const posts = (await getCollection('blog', ({ data }) => !data.draft))
    .sort((a, b) => b.data.pubDate.valueOf() - a.data.pubDate.valueOf())

  return rss({
    title: 'mononz',
    description: 'Dev log by Jared Hall.',
    site: context.site,
    items: posts.map((post) => ({
      title: post.data.title,
      description: post.data.description,
      pubDate: post.data.pubDate,
      link: `/blog/${post.id.replace(/\.(md|mdx)$/, '').replace(/^\d+-/, '')}`,
      categories: post.data.tags,
    })),
  })
}

Structured data

Google uses structured data to understand what your pages are about. I add JSON-LD to every page. For a personal site:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Jared Hall",
  "url": "https://mononz.com",
  "jobTitle": "Native App Developer",
  "sameAs": [
    "https://github.com/mononz",
    "https://x.com/mononz",
    "https://linkedin.com/in/mononzz"
  ]
}
</script>

For blog posts, I add Article and BreadcrumbList schemas. These help Google show rich results with dates and breadcrumb trails in search.
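As a sketch, the BreadcrumbList for a post can be built with a small helper (the helper name and URL shapes are assumptions, not my exact code):

```javascript
// Illustrative helper that builds a schema.org BreadcrumbList for a
// blog post. Helper name and URL shapes are assumptions.
function breadcrumbSchema(siteUrl, postTitle, postSlug) {
  const crumbs = [
    { name: 'Home', item: siteUrl },
    { name: 'Blog', item: `${siteUrl}/blog` },
    { name: postTitle, item: `${siteUrl}/blog/${postSlug}` },
  ]
  return {
    '@context': 'https://schema.org',
    '@type': 'BreadcrumbList',
    itemListElement: crumbs.map((c, i) => ({
      '@type': 'ListItem',
      position: i + 1,
      name: c.name,
      item: c.item,
    })),
  }
}
```

In an Astro layout you can emit it with something like `<script type="application/ld+json" set:html={JSON.stringify(breadcrumbSchema(...))} />`.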

GitHub Actions deployment

Here’s the workflow that deploys on push to master:

name: Production

on:
  push:
    branches:
      - master

jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      checks: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v6
        with:
          fetch-depth: 0

      - uses: actions/setup-node@v6
        with:
          node-version: 22
          cache: 'npm'
          cache-dependency-path: package-lock.json

      - name: Install
        run: npm ci

      - name: Build
        run: npm run build

      - uses: FirebaseExtended/action-hosting-deploy@v0
        with:
          repoToken: '${{ secrets.GITHUB_TOKEN }}'
          firebaseServiceAccount: '${{ secrets.GCP_CREDENTIALS }}'
          channelId: live
          projectId: your-firebase-project-id

The fetch-depth: 0 is important. Without it, GitHub checks out only the latest commit, so git log reports that single commit’s date for every file and the per-page lastmod values are meaningless.

You need a Firebase service account key stored as GCP_CREDENTIALS in your repo secrets. You can create one in the Google Cloud Console under IAM > Service Accounts. Give it the Firebase Hosting Admin role and export the JSON key.

Google Cloud Console setup

A few things to do in the Google Cloud Console for Firebase Hosting:

  1. Create a service account for CI/CD. Go to IAM & Admin > Service Accounts, create one with Firebase Hosting Admin permissions, and download the JSON key. This goes into your GitHub repo as a secret.
  2. Connect your custom domain in the Firebase Console under Hosting > Custom domains. Firebase will give you DNS records to add to your domain registrar. SSL is handled automatically.
  3. Set up separate projects for staging and production if you need staging. I use separate Firebase projects so staging has its own URL and doesn’t touch production.

Google Search Console

Once your site is live, set it up in Google Search Console. This is how you tell Google your site exists and monitor how it’s performing in search.

  1. Verify your domain - for a Domain property, Search Console gives you a DNS TXT record to add at your DNS provider. Since my DNS is already in Cloudflare, this takes a minute.
  2. Submit your sitemap - Point it at /sitemap-index.xml. This is the sitemap that Astro generates with the git-based last-modified dates.
  3. Request indexing - For new sites, use the URL Inspection tool to request indexing of your key pages. Google will find them through the sitemap eventually but this speeds it up.
  4. Monitor - Search Console will show you which queries are bringing people to your site, which pages are indexed, and any issues Google found crawling your site. Check it every now and then.

If you have staging and production on separate domains, only add production to Search Console. Your staging robots.txt should be blocking crawlers anyway.

The pattern

After doing this 4 times, the setup is almost copy-paste between projects. The main things that change are the site URL, Firebase project ID, and the content. The Astro config, Firebase config, trailing slash handling, sitemap, robots.txt, and deployment workflow are basically identical.

If you’re deploying Astro to Firebase Hosting, hopefully this saves you a few hours of figuring it out. It certainly would have saved me some time on sites 1 and 2.