What is the AWS Modernization case?

iSimplifyMe operates an 80-plus-site client portfolio on AWS and is refactoring legacy WordPress on EC2 with WHM/cPanel into AWS-native serverless (Lambda@Edge, CloudFront, DynamoDB, S3, SES). This is an intra-AWS modernization: the customer has been on AWS for years, and the migration ascends from IaaS (EC2) to serverless (Lambda) within the same account perimeter under the canonical AWS 7 Rs Refactor pattern. The operational control plane is Apex Architecture, which provides per-tenant observability, AI-augmented support automation, a content pipeline, and AEO compliance scoring across the entire portfolio. As of May 2026, 9 of the 80-plus sites are live on the new substrate; 70-plus remain in the pipeline. The per-site EC2-plus-cPanel baseline averages roughly $90 per month in us-west-2; modernized serverless per-site cost approaches zero at idle for low-traffic sites.

The setup

iSimplifyMe operates a portfolio of 80-plus client websites spanning healthcare practices, legal firms, real-estate data products, specialty trades, automotive editorial brands, eldercare directories, and SaaS products. The vast majority of these sites run on WordPress (PHP/MySQL) hosted on AWS EC2 instances in us-west-2 (Oregon), with WHM/cPanel as the orchestration layer. While the customer has been on AWS for years, the EC2-plus-cPanel pattern carries the operational weight of on-prem hosting plus AWS-specific drawbacks: an idle compute floor, shared-tenancy OS coupling, WHM/cPanel licensing overhead, no per-site CloudWatch-native observability, and no BAA eligibility.

The mandate: ascend the AWS stack from IaaS (EC2) to serverless (Lambda@Edge) without abandoning the AWS account perimeter the customer has operated within for years.

Why this is harder than it looks

Problem 01

Idle EC2 compute floor

EC2 instances run 24/7 regardless of per-site traffic. For an 80-site fleet where most sites get fewer than 1,000 requests per day, the agency pays for compute that sits idle the majority of every hour. The us-west-2 EC2 baseline averages roughly $90 per month across the shared-tenancy instance, flat from December 2025 through April 2026 because the bulk of legacy sites are still on it. Refactoring to Lambda@Edge collapses this floor to pay-per-request pricing.
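To make the floor-versus-per-request contrast concrete, here is a back-of-envelope sketch in TypeScript. The unit prices, request duration, and memory size are illustrative assumptions for the sketch, not quoted AWS rates or measured fleet numbers; only the ~$90/month per-site baseline comes from the case study.

```typescript
// Illustrative comparison: flat per-site EC2 floor vs pay-per-request serverless.
// All prices below are assumptions for the sketch, not quoted AWS rates.
const REQUEST_PRICE_PER_MILLION = 0.6; // assumed per-request price (USD / 1M requests)
const GB_SECOND_PRICE = 0.00005001; // assumed duration price (USD per GB-second)

function monthlyServerlessCost(
  requestsPerDay: number,
  avgDurationMs = 50, // assumed average invocation duration
  memoryGb = 0.128, // assumed 128 MB function
): number {
  const requests = requestsPerDay * 30;
  const requestCost = (requests / 1_000_000) * REQUEST_PRICE_PER_MILLION;
  const gbSeconds = requests * (avgDurationMs / 1000) * memoryGb;
  return requestCost + gbSeconds * GB_SECOND_PRICE;
}

// Roughly $90/month per-site EC2+cPanel baseline (from the case study)
const ec2FloorPerSite = 90;
const lowTrafficSite = monthlyServerlessCost(1_000); // fewer than 1,000 req/day
console.log(`serverless: ~$${lowTrafficSite.toFixed(3)}/mo vs EC2 floor: $${ec2FloorPerSite}/mo`);
```

Under these assumptions a sub-1,000-request-per-day site lands at pennies per month, which is the sense in which the idle cost floor "approaches zero" while still scaling linearly if traffic arrives.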

Problem 02

WHM/cPanel licensing and OS coupling

WHM/cPanel licensing adds a per-server cost layer that scales with instance size, not site value. Worse, all 80-plus sites share the underlying EC2 OS image — a kernel update, PHP version bump, or cPanel maintenance event hits every site simultaneously. The fix is one CloudFront distribution per site plus an ACM cert per site plus an isolated Lambda@Edge runtime — no shared OS, no per-server licensing, no fleet-wide maintenance events.

Problem 03

No CloudWatch-native observability per site

cPanel logs live in local rotation on the EC2 host, not in CloudWatch. Identifying issues across 80-plus sites requires manual inspection per host. Apex Architecture replaces this with per-tenant CloudWatch structured logs, persistent uptime time series in DynamoDB, response-time rollups, and surfaced charts in a per-client dashboard.
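The per-tenant structured-log approach can be sketched as one JSON object per line, so CloudWatch Logs Insights can filter and aggregate by tenant. The field names (`tenantId`, `site`, `metric`) are illustrative assumptions, not the actual Apex schema.

```typescript
// Sketch of per-tenant structured logging: emit one JSON object per line.
// Anything a Lambda writes to stdout lands in CloudWatch Logs, where
// Logs Insights can then filter by tenantId. Field names are assumptions.
type TenantLog = {
  tenantId: string;
  site: string;
  metric: "uptime_probe" | "response_time" | "error";
  value: number;
  ts: string;
};

function logTenantMetric(entry: Omit<TenantLog, "ts">): string {
  const line = JSON.stringify({ ...entry, ts: new Date().toISOString() });
  console.log(line); // stdout from the Lambda runtime flows to CloudWatch Logs
  return line;
}

logTenantMetric({
  tenantId: "t_acme", // hypothetical tenant id
  site: "acme.example.com",
  metric: "response_time",
  value: 182,
});
```

Because every line is self-describing JSON, a single Logs Insights query can roll up response times across all 80-plus tenants instead of grepping per-host cPanel rotations.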

Problem 04

No BAA-eligibility for regulated verticals

EC2 with cPanel is not BAA-eligible; cPanel is third-party software with its own data flows. Healthcare and legal clients increasingly require IAM least-privilege, encryption at rest, audit logs, and BAA-eligible services that cPanel-orchestrated EC2 cannot cleanly provide. AWS-native serverless (Lambda@Edge, DynamoDB, S3, SES, Bedrock) is BAA-eligible end to end, which opens regulated-vertical client engagements that cPanel hosting locked out.

Per-site refactor pattern (AWS 7 Rs → Refactor)

Legacy (AWS EC2 + WHM/cPanel) → Refactor target (AWS-native serverless)

WordPress (PHP) + MySQL on EC2 → Next.js 15 (App Router) + TypeScript on Lambda@Edge
EC2 instance(s) in us-west-2 + WHM/cPanel → AWS Lambda@Edge + CloudFront (per-site distribution, ACM cert)
Per-site Apache vhost on shared EC2 → Per-site CloudFront distribution (isolated per client)
WordPress media library on EC2 disk → Amazon S3 + CloudFront origin
MySQL on EC2 (wp_options + wp_posts) → Amazon DynamoDB single-table (composite-key isolation)
WordPress contact form → Apex API endpoint + DynamoDB + Amazon SES
WordPress editor + Yoast SEO → Apex content pipeline (MDX) + AEO scanner
cPanel cron / WP-Cron on EC2 → AWS Lambda + SST Crons (5-min uptime probes, daily Lighthouse, monthly PDF)
cPanel local log rotation → Amazon CloudWatch Logs (structured per-tenant logging)
cPanel email / shared SMTP → Amazon SES with verified domain identity
24/7 EC2 compute floor → Pay-per-request Lambda@Edge (near-zero idle)
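The "composite-key isolation" in the DynamoDB row can be sketched as follows. The key shapes (`TENANT#...` / `POST#...`) are assumptions for illustration, not the actual Apex table schema.

```typescript
// Illustrative single-table layout for per-tenant isolation in DynamoDB.
// Every item carries a tenant-scoped partition key, so a key-condition
// Query on pk returns exactly one client's data and nothing else.
// Key prefixes below are assumptions, not the real Apex schema.
const tenantPk = (tenantId: string): string => `TENANT#${tenantId}`;
const postSk = (slug: string): string => `POST#${slug}`;

function postKey(tenantId: string, slug: string): { pk: string; sk: string } {
  return { pk: tenantPk(tenantId), sk: postSk(slug) };
}

// A per-tenant listing query would use a KeyConditionExpression like:
//   "pk = :pk AND begins_with(sk, :prefix)"
// which makes cross-tenant reads impossible from a key-condition query alone.
console.log(postKey("t_acme", "hello-world"));
```

This is the DynamoDB equivalent of the per-site Apache vhost split, except enforced at the key level rather than by OS-level file permissions on a shared EC2 host.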

The application framework also modernizes: WordPress (PHP/jQuery) becomes Next.js 15 with React Server Components, TypeScript, and Incremental Static Regeneration, mapping natively onto the Lambda@Edge runtime. Plugin-driven security patching, PHP version-bump risk, and the shrinking pool of WordPress PHP developers all retire as the refactor lands.

Build log

  1. Phase 0

    Apex control plane live (March 2026)

    The multi-tenant operational substrate that makes per-site refactor economically viable at agency scale. 9 functional modules per tenant (Bot Analytics, Search Performance, Site Health, Support + Changes, Content Pipeline, AEO Scores, Billing, SOW Generator, Admin). 11 Cloudflare Workers streaming bot-hit events into DynamoDB. SST Crons firing uptime probes every 5 minutes against all tenant domains. Production at apex.isimplifyme.com.
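The 5-minute uptime probes can be sketched as a small handler that an SST Cron schedules with `rate(5 minutes)`. The handler shape, result fields, and the injectable fetch parameter are assumptions for illustration; the real Apex probe and its DynamoDB write are not shown, and SST Cron property names vary by version.

```typescript
// Sketch of an uptime probe of the kind an SST Cron could fire every
// 5 minutes ("rate(5 minutes)") against each tenant domain.
// Result shape is illustrative, not the Apex schema.
type ProbeResult = { domain: string; up: boolean; ms: number };

async function probe(
  domain: string,
  // injectable for testing; defaults to the global fetch in the Lambda runtime
  fetchFn: (url: string) => Promise<{ ok: boolean }> = fetch,
): Promise<ProbeResult> {
  const start = Date.now();
  try {
    const res = await fetchFn(`https://${domain}/`);
    return { domain, up: res.ok, ms: Date.now() - start };
  } catch {
    // network error or timeout counts as down
    return { domain, up: false, ms: Date.now() - start };
  }
}
```

In production the handler would iterate the tenant list and persist each `ProbeResult` as a time-series item in DynamoDB, which is what feeds the per-client uptime charts described above.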

  2. Phase 1

    First wave of client refactors (April–May 2026)

    9 client sites refactored to AWS-native serverless to date — validation of the migration pattern across healthcare, legal, real-estate, consumer, editorial, and SaaS verticals. Each per-site refactor follows the same shape: Next.js scaffold + content port + DynamoDB schema + CloudFront distribution + ACM cert + SES verified domain + AEO compliance pass. ~1–2 weeks per site.

  3. Phase 1.5

    AI-augmented support pipeline live (April 2026)

    SQS-driven Lambda executor calls Amazon Bedrock (Anthropic Claude Haiku) to analyze Tier-1 support tickets (text changes, contact-info updates, bug reports), generates a code patch, opens a GitHub pull request on the client repository, and notifies the admin via Slack + Amazon SES. Tier-1 tickets auto-resolve end-to-end with human approval gate on PR merge. Production ticket volume small to date (4 tickets recorded) — automation infrastructure validated end-to-end.
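The prompt-construction step of that pipeline can be sketched as below. The ticket fields and prompt wording are assumptions, not the Apex schema; the Bedrock invocation itself is indicated only in comments since it requires AWS credentials.

```typescript
// Sketch of the Tier-1 flow's prompt step: an SQS-driven Lambda builds a
// prompt from the ticket and sends it to Bedrock (Claude Haiku).
// Ticket fields and prompt wording are illustrative assumptions.
type Ticket = {
  id: string;
  site: string;
  kind: "text_change" | "contact_update" | "bug_report";
  body: string;
};

function buildTicketPrompt(t: Ticket): string {
  return [
    `You are a Tier-1 support agent for ${t.site}.`,
    `Ticket ${t.id} (${t.kind}):`,
    t.body,
    "Respond with a unified diff touching content files only.",
  ].join("\n");
}

// The handler would then invoke Bedrock (e.g. via @aws-sdk/client-bedrock-runtime),
// turn the returned patch into a GitHub pull request on the client repo, and
// notify via Slack + SES -- with the human approval gate on PR merge.
```

Keeping the model's mandate narrow ("content files only") is what makes the human-approval-on-merge gate a meaningful review step rather than a rubber stamp.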

  4. Phase 2

    Remaining 70+ sites (in progress)

    70-plus sites remain on legacy AWS EC2 plus WHM/cPanel in us-west-2. Each subsequent refactor deploys onto the Apex control plane that already exists — engineering capacity isn’t the bottleneck; coordination with each client on content cutover and DNS switchover windows is. Target cadence: a few sites per month.

  5. Phase 3

    Decommission legacy EC2 (post-cutover)

    Once the 80-plus site fleet is fully refactored, the legacy us-west-2 EC2 instance + WHM/cPanel reseller account is decommissioned. The ~$90/month EC2 compute floor disappears, replaced by per-site Lambda@Edge invocation costs that scale with real traffic. AWS Migration Competency application (Year 2 milestone) sources its evidence from this case study.

Built and operated, not delivered.

Most "cloud migration" engagements end with a runbook for the client’s internal team to execute.

This one is a multi-year refactor program iSimplifyMe runs against its own 80-plus site portfolio in production. 9 sites refactored to date, 70-plus in the pipeline, the Apex control plane firing uptime probes every 5 minutes across the fleet. When Phase 3 lands the legacy us-west-2 EC2 instance gets decommissioned, and the AWS Migration Competency application sources its evidence from this case study.

Intra-AWS. Refactor pattern. AWS-native.

Frequently asked questions

How is this different from a typical AWS migration project?

A typical AWS migration moves workloads into AWS from on-prem or another cloud. This isn't that. The customer has been on AWS for years via EC2 plus WHM/cPanel; the modernization ascends the AWS stack from IaaS (EC2) to serverless (Lambda@Edge) within the same AWS account. Same IAM perimeter, same CloudTrail, same Cost Explorer, same operational team. The architectural pattern changes; the AWS relationship does not.

Why refactor instead of rehost or replatform?

Rehost (lift-and-shift) keeps you on the EC2 cost model — you still pay 24/7 compute floor regardless of traffic. Replatform (e.g., EC2 to ECS) eliminates server management but keeps you on always-on compute. For an 80-site portfolio where most sites get fewer than 1,000 requests per day, only Refactor (per the AWS 7 Rs framework) collapses the idle compute cost. Lambda@Edge charges per request, so the cost floor approaches zero when sites are idle and scales linearly with real traffic.

What runs the migrated sites operationally?

iSimplifyMe. Each refactored site deploys as a Next.js application on AWS Lambda@Edge plus CloudFront via SST (Pulumi). The operational control plane — Apex Architecture — provides per-tenant observability (uptime, response time, bot/AI-crawler traffic, AEO compliance scores), AI-augmented support automation (Bedrock Claude Haiku analyzes Tier-1 tickets and opens GitHub pull requests), and content pipeline (Markdown-to-MDX with AEO validation). The client doesn’t need an internal data or DevOps team to keep it running.

Why does this matter for AWS Partner positioning?

It’s the canonical AWS modernization narrative — a multi-year AWS customer ascending from IaaS to serverless within the same account perimeter. AWS BD recognizes the 7 Rs Refactor pattern immediately. The case study sets up future AWS Migration Competency application (Year 2 milestone). The cost-elimination story (24/7 EC2 compute → pay-per-request Lambda@Edge) plus the compliance posture upgrade (IAM least-privilege, encryption-at-rest, CloudWatch audit logs, BAA-eligibility for healthcare-adjacent clients) translates directly into AWS Partner co-marketing and Competency surfaces.

How long does the per-site refactor take?

Roughly 1–2 weeks per site once the pattern is in place, depending on content complexity. The Apex control plane (the multi-tenant substrate that observes and operates the refactored sites) was the heavier lift — that’s been live in production since March 2026, and each subsequent site refactor just deploys onto it. The bottleneck on the 70-plus remaining sites isn’t engineering capacity; it’s coordination with each client on content cutover and DNS switchover windows.

Get Started

Discuss an AWS modernization for your fleet

If you’re running WordPress on EC2 (with or without WHM/cPanel) and the per-site idle compute cost is eating margins on your low-traffic sites, we can talk through the Refactor pattern for your specific fleet.

  • Discovery call: 30 min · Free · No deck, actual mechanics
  • Per-site timeline: 1–2 weeks per site once the pattern is in place
  • iSM-operated: AWS account, monitoring, AEO compliance, all on us