Kamal on Akamai/Linode vs. AWS ECS Fargate for Sporadic Ruby on Rails Apps

There’s a version of platform strategy that looks beautifully simple on a whiteboard. You pick a PaaS everyone trusts, pay the monthly bill, ship code, and sleep at night. Then one day the vendor quietly shifts focus to “sustainability engineering” and you realize the platform isn’t getting the love it used to. Sound familiar?

That’s exactly where I found myself with Heroku. I’ve been running Ruby on Rails apps there for years (solid, predictable, zero-ops bliss). But the writing’s on the wall. So I did what a lot of you are probably doing right now: I started moving some workloads to Akamai/Linode using Kamal (the fantastic container deployment tool from the 37signals/Basecamp crew). It’s been working great. Dead simple Docker-based deploys, automatic HTTPS, and I could even switch to SQLite for the lighter apps because…well, local disk just works.

But then the question hit me (and I know it’s hitting a bunch of you too): For apps that don’t have constant traffic (just occasional bursts or truly sporadic usage), is there something better than keeping persistent VPS instances running 24/7? Specifically, would AWS ECS on Fargate give me better cost efficiency and resiliency, or am I just trading one set of headaches for another?

I spent the last few weeks running real workloads both ways, measuring actual bills, cold-start times, operational pain, and SQLite realities. Here’s the no-BS strategic breakdown I wish I’d had when I started this journey.

The Setup That Got Me Here

Most of my Rails apps are classic web plus background jobs. They don’t need 24/7 heavy lifting (just reliable responses when someone actually visits). Heroku made scaling to zero feel effortless back in the day. Kamal on Linode/Akamai gave me the same “deploy and forget” vibe but with full control and lower costs. I run Puma behind kamal-proxy (which handles Let’s Encrypt automatically), rely on automated nightly VPS backups for SQLite durability, and Kamal handles zero-downtime deploys like a champ.

Fargate promised serverless container magic: pay only for what you use, auto-scale, multi-AZ resilience out of the box. Sounds perfect for sporadic traffic, right? Not so fast.

Architecture Reality Check

With Kamal on a Linode/Akamai VPS you get a full Linux VM. The container runs as a long-lived process. SQLite lives right on the local NVMe SSD (zero latency, zero extra cost, and durability from automated nightly VPS backups). The whole thing is always warm so cold starts aren’t a thing. Deployments are as easy as running kamal deploy and you’re done.

ECS Fargate is container orchestration without the servers. You define a task (CPU/memory combo), point an Application Load Balancer at it, and ECS handles the rest. Tasks spin up on demand and die when scaled down. Networking is awsvpc mode (each task gets its own ENI). Sounds elegant.
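To make that concrete, here’s a sketch of what a minimal Fargate task definition for a small Rails app might look like, shaped as the params hash you’d pass to `Aws::ECS::Client#register_task_definition` from the `aws-sdk-ecs` gem. The family name, image URL, account ID, and log group are all placeholders, not values from any real deployment.

```ruby
# Hypothetical Fargate task definition for a small Rails app.
# All names, the ECR image URL, and the log group are placeholders.
def rails_task_definition
  {
    family: "rails-sporadic-app",
    requires_compatibilities: ["FARGATE"],
    network_mode: "awsvpc",   # each task gets its own ENI
    cpu: "512",               # 0.5 vCPU (must be a valid Fargate pairing with memory)
    memory: "1024",           # 1 GB
    container_definitions: [
      {
        name: "web",
        image: "123456789012.dkr.ecr.us-east-1.amazonaws.com/rails-app:latest",
        essential: true,
        port_mappings: [{ container_port: 3000, protocol: "tcp" }],
        log_configuration: {
          log_driver: "awslogs",
          options: {
            "awslogs-group"         => "/ecs/rails-sporadic-app",
            "awslogs-region"        => "us-east-1",
            "awslogs-stream-prefix" => "web"
          }
        }
      }
    ]
  }
end
```

Note the `cpu`/`memory` strings: Fargate only accepts specific pairings (0.5 vCPU goes with 1–4 GB), which is one more constraint a plain VPS never asks you to think about.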

Except SQLite hates ephemeral storage. The free 20 GB temp disk disappears the moment the task stops. So you’re forced into Amazon EFS (network filesystem) or Litestream-to-S3 replication. EFS adds latency and file-locking drama for concurrent writes. Suddenly your “simple” SQLite app isn’t so simple anymore.

The Cost Numbers That Actually Matter (April 2026 pricing)

I ran the math on a minimal Rails app (0.5 vCPU, 1 GB RAM, roughly 500 MB to 1 GB Docker image) that sees light, bursty traffic.

Akamai/Linode + Kamal (shared CPU Nanode):

  • 1 GB instance: $5/mo
  • 2 GB: $12/mo
  • 4 GB (plenty for most small Rails apps): $24/mo
  • Optional NodeBalancer: $10/mo
  • Transfer is pooled and generous (no nickel-and-diming until you blow past it).

Multiple apps can share one VPS. Real monthly spend for a low-traffic service? Often under $15.

AWS ECS Fargate (us-east-1, always-on single task):

  • 0.5 vCPU plus 1 GB memory: roughly $18/mo just for compute
  • Application Load Balancer base fee: roughly $16.50/mo plus LCU charges (even low traffic adds up)
  • CloudWatch, ECR, public IPv4, logging: another $5 to $15/mo easy
  • Total: $40 to $55/mo for one always-running service

If you try to scale to zero with scheduled tasks or custom metrics? You can shave some cost, but ALB health checks keep waking things up, and cold starts become part of the user experience. Fargate Spot saves money but introduces interruption risk (fine for batch jobs, sketchy for web requests).

Bottom line: for truly sporadic apps that still need to feel instant, the VPS route wins on pure dollars. Often 2–3× cheaper.
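If you want to sanity-check those numbers against your own workload, the math is simple enough to script. This is a back-of-the-envelope model using the approximate April 2026 us-east-1 rates cited above; the per-vCPU and per-GB hourly rates and the “misc” bucket are assumptions you should replace with your own bill.

```ruby
# Rough monthly cost model: always-on Fargate task vs. a Linode VPS.
# Rates are the approximate figures from the breakdown above; swap in yours.
HOURS_PER_MONTH = 730.0

def fargate_monthly(vcpu: 0.5, gb: 1.0,
                    vcpu_rate: 0.04048, gb_rate: 0.004445, # assumed $/hr, us-east-1
                    alb_base: 16.50, misc: 10.0)           # ALB + CloudWatch/ECR/IPv4 guess
  compute = (vcpu * vcpu_rate + gb * gb_rate) * HOURS_PER_MONTH
  (compute + alb_base + misc).round(2)
end

def linode_monthly(instance: 12.0, nodebalancer: 0.0)
  instance + nodebalancer
end

puts "Fargate (always-on): ~$#{fargate_monthly}/mo"
puts "Linode 2 GB:         ~$#{linode_monthly}/mo"
```

With the defaults, the Fargate side lands right in the middle of the $40 to $55 range, and the 2 GB Linode sits at $12: roughly the 2–3× gap described above before you even share the VPS across apps.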

Resiliency: Where Fargate Can Shine (But at What Cost?)

Let’s be honest, a Linode/Akamai single VPS is a single point of failure. Host reboot, AZ outage, disk hiccup…you’re down until it comes back. Kamal makes recovery fast, and automated nightly VPS backups keep data safe, but it’s not multi-AZ magic.

Fargate gives you that out of the box. Tasks can span AZs, unhealthy ones get auto-replaced, ALB health checks handle the rest. No OS patching. Built-in IAM, Secrets Manager, CloudWatch integration. If you’re already living in AWS, the operational comfort is real.

The trade-off? Cold starts. Even with optimized images (SOCI lazy loading, tiny base images, zstd compression), Rails boot plus image pull on a sporadic app can easily add 30 to 90 seconds of latency for the first visitor after quiet periods. Not ideal for user-facing apps.
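That 30 to 90 second figure isn’t one number, it’s a sum of stages. Here’s an illustrative budget (the per-stage durations are assumptions for a typical small Rails image, not measurements) that shows why shaving any single stage rarely fixes the whole problem:

```ruby
# Illustrative cold-start budget for a scaled-to-zero Fargate Rails task.
# Stage durations are assumed examples, not benchmarks.
def cold_start_seconds(eni_and_placement: 15, # task scheduling + ENI attach
                       image_pull: 25,        # ECR pull of a ~500 MB-1 GB image
                       rails_boot: 20,        # Puma + Rails eager load
                       health_check: 10)      # ALB marks target healthy
  eni_and_placement + image_pull + rails_boot + health_check
end

puts "First visitor waits ~#{cold_start_seconds}s"
puts "With SOCI lazy pull: ~#{cold_start_seconds(image_pull: 8)}s"
```

Even the optimistic variant stays well above what a warm VPS delivers, which is effectively zero because the process never stopped.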

SQLite Changes My Database Approach

This is the part nobody talks about enough. On VPS plus Kamal, SQLite is native, fast, and boring (in the best way). On Fargate you’re either paying for EFS or engineering around it. Both add cost and complexity that defeat the “serverless” promise for simple apps.

One caveat: check which database extensions you’ve relied on (e.g. pgvector) before racing to SQLite. I’ve got a couple of apps so deeply embedded in Postgres-specific tooling that I’ll likely just move them to a shared PG instance.

My Recommendation (And the Pattern I’m Standardizing On)

For apps without persistent heavy usage, stick with (or expand) Akamai/Linode VPS instances running Kamal plus SQLite. The deciding factors for me:

  • Dramatically lower total cost
  • Simpler operations and faster deploys
  • Warm starts and zero cold-start pain
  • The exact “deploy and forget” feeling I loved about old-school Heroku

Use Fargate only when you need true multi-AZ HA, you’re already deep in AWS, or traffic patterns become predictable enough that auto-scaling shines.

I even tested running Kamal directly against AWS EC2 (plenty of instance types are wicked cheap and fast) in case I want to stay in the Kamal ecosystem while keeping the flexibility to extend an app into other AWS-native services.

The Repeatable Workflow I’m Using Now

  1. Profile real CPU/memory/traffic on a representative app
  2. Start with the smallest Linode Nanode that fits
  3. Rely on automated nightly VPS backups for SQLite durability
  4. Keep Kamal config boring and consistent across apps
  5. Only prototype Fargate if the resiliency requirements justify the cost and complexity
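Step 2 of that workflow is easy to automate. Here’s a small sketch that picks the smallest shared-CPU plan for a profiled peak memory footprint, using the April 2026 prices listed earlier; the 1.5× headroom factor is my own rule of thumb, so tune it to taste.

```ruby
# Pick the smallest Linode shared-CPU plan that fits profiled peak RSS,
# with headroom for deploys and traffic spikes. Prices from the list above.
PLANS = [
  { label: "Nanode 1 GB", ram_mb: 1024, usd: 5 },
  { label: "Linode 2 GB", ram_mb: 2048, usd: 12 },
  { label: "Linode 4 GB", ram_mb: 4096, usd: 24 }
].freeze

def smallest_plan(peak_rss_mb, headroom: 1.5)
  needed = (peak_rss_mb * headroom).ceil
  PLANS.find { |p| p[:ram_mb] >= needed } || PLANS.last
end

# Puma + Solid Queue peaking around 600 MB RSS fits the $5 Nanode.
puts smallest_plan(600)[:label]
```

Run this against the numbers from step 1 and you have a defensible starting size instead of a guess; you can always `kamal` your way onto a bigger plan later.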

Simplicity plus consistency FTW. I’m a LazyOps type of person so this is definitely my speed.

If you’re in the same boat (Heroku fatigue, Kamal success, and now wondering about Fargate), drop your traffic patterns or resource numbers in the comments. I’m happy to run a more precise cost model for your specific case.

Hope this saves you some late-night spreadsheet sessions. Happy deploying, and may your bills stay low and your apps stay boringly reliable.
