Migrating a codebase from one language or framework to another is grunt work that eats weeks. Converting a Flask app to FastAPI, JavaScript to TypeScript, or Express routes to Hono — the patterns are mechanical, but there are enough edge cases to make manual conversion painful. AI handles this surprisingly well: feed it source code, tell it the target, and it produces working output with the right idioms. This tutorial builds a CLI migration tool that processes entire directories, file by file, using Claude through EzAI API.
By the end you'll have a tool that reads a source directory, converts each file to the target language or framework, writes the output alongside the original, and generates a migration report showing what changed. The full source is under 200 lines of Python.
Why AI for Code Migration?
Traditional migration tools rely on AST transforms — they parse your code into a tree, apply rewrite rules, and spit out the result. They work great for simple renames and import swaps, but fall apart on anything that requires semantic understanding. Renaming app.route to @app.get is trivial; figuring out that a Flask before_request handler should become a FastAPI middleware dependency is not.
LLMs bridge that gap. Claude can look at a Flask view function that manually validates request JSON, and rewrite it as a FastAPI endpoint with Pydantic models — because it understands what the code means, not just what it looks like. The trade-off is cost per token, but for a one-time migration of a few hundred files, it's a rounding error compared to developer hours.
Project Setup
You need Python 3.10+ and an EzAI API key. If you haven't set one up yet, see the getting started guide.
mkdir ai-migrate && cd ai-migrate
pip install anthropic pathspec rich
Three dependencies: anthropic (the SDK, which works with EzAI's compatible endpoint), pathspec for gitignore-style filtering, and rich for terminal output. Nothing else.
The Migration Engine
The core loop is straightforward: walk a directory tree, filter out binary files and ignored paths, send each source file to Claude with a migration prompt, and write the converted output. Here's the full engine:
# migrate.py — AI code migration CLI
import os, sys, json, time
from pathlib import Path
from anthropic import Anthropic
from rich.console import Console
from rich.progress import Progress, SpinnerColumn, TextColumn
import pathspec

console = Console()
client = Anthropic(
    base_url="https://ezaiapi.com",
    api_key=os.environ["ANTHROPIC_API_KEY"],
)
# File extensions to process per migration type
EXTENSION_MAP = {
"js-to-ts": [".js", ".jsx"],
"flask-to-fastapi": [".py"],
"express-to-hono": [".js", ".ts"],
"react-class-to-hooks": [".jsx", ".tsx"],
"custom": ["*"],
}
def build_prompt(migration_type, source_code, filename):
    prompts = {
        "js-to-ts": f"""Convert this JavaScript file to TypeScript.
Add proper type annotations for all function parameters, return
types, and variables. Use interfaces for object shapes. Preserve
all logic exactly — only add types.

File: {filename}
```
{source_code}
```

Return ONLY the converted TypeScript code, no explanation.""",
        "flask-to-fastapi": f"""Convert this Flask application code to FastAPI.
- Replace @app.route with @app.get/@app.post/etc.
- Convert request.json to Pydantic models
- Replace jsonify() with direct dict returns
- Convert before_request to middleware/dependencies
- Update imports accordingly

File: {filename}
```
{source_code}
```

Return ONLY the converted FastAPI code, no explanation.""",
        "express-to-hono": f"""Convert this Express.js code to Hono framework.
- Replace express() with Hono()
- Convert req.body/req.params to c.req patterns
- Replace res.json() with c.json()
- Update middleware syntax
- Use Hono's built-in validator where applicable

File: {filename}
```
{source_code}
```

Return ONLY the converted Hono code, no explanation.""",
    }
    # Fail loudly instead of silently falling back to the wrong prompt
    if migration_type not in prompts:
        raise ValueError(f"No prompt defined for migration type: {migration_type}")
    return prompts[migration_type]
Migration pipeline: scan → filter → convert → write → report
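Note that the CLI accepts react-class-to-hooks, but the prompts dict above has no entry for it. A minimal sketch of a prompt you could add for that type — the function name and the exact wording are my assumptions, so tune the rules to your codebase:

```python
# Hypothetical: a prompt for the react-class-to-hooks type, shaped like
# the entries in build_prompt(). Wording is an assumption, not canonical.
def react_hooks_prompt(source_code, filename):
    return f"""Convert this React class component to function components with hooks.
- Replace this.state / this.setState with useState
- Move componentDidMount/componentDidUpdate/componentWillUnmount into useEffect
- Convert bound class methods to inner functions or useCallback
- Preserve props, context usage, and render output exactly

File: {filename}

Source:
{source_code}

Return ONLY the converted code, no explanation."""
```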
File Discovery and Filtering
You don't want to send node_modules or binary files to an LLM. The discovery function respects .gitignore patterns and skips files that aren't relevant to the migration:
def discover_files(src_dir, migration_type):
    src = Path(src_dir)
    extensions = EXTENSION_MAP.get(migration_type, ["*"])

    # Load .gitignore if present
    gitignore = src / ".gitignore"
    spec = None
    if gitignore.exists():
        spec = pathspec.PathSpec.from_lines(
            "gitwildmatch",
            gitignore.read_text().splitlines()
        )

    files = []
    for path in src.rglob("*"):
        if not path.is_file():
            continue
        rel = path.relative_to(src)
        # Skip gitignored paths
        if spec and spec.match_file(str(rel)):
            continue
        # Skip common junk directories
        parts = rel.parts
        if any(p in {"node_modules", ".git", "__pycache__",
                     "dist", "build", ".venv"} for p in parts):
            continue
        # Check extension match
        if "*" not in extensions and path.suffix not in extensions:
            continue
        # Skip files over 100KB (likely generated)
        if path.stat().st_size > 100_000:
            console.print(f"  ⏭ Skipping {rel} (too large)", style="dim")
            continue
        files.append(path)
    return sorted(files)
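If you want to sanity-check the skip rules in isolation before pointing the tool at a real repo, the junk-directory and extension filters distill to a few lines of stdlib code (this standalone sketch omits the gitignore part; the paths are made-up examples):

```python
import tempfile
from pathlib import Path

# Same junk set discover_files uses
JUNK = {"node_modules", ".git", "__pycache__", "dist", "build", ".venv"}

def keep(path, root, extensions):
    """Mirror discover_files' skip logic for one path (no gitignore handling)."""
    rel = path.relative_to(root)
    if any(p in JUNK for p in rel.parts):
        return False  # inside a junk directory
    if "*" not in extensions and path.suffix not in extensions:
        return False  # wrong extension for this migration type
    return True

# Build a throwaway tree and filter it
root = Path(tempfile.mkdtemp())
for p in ["src/app.js", "src/style.css", "node_modules/lib/index.js"]:
    f = root / p
    f.parent.mkdir(parents=True, exist_ok=True)
    f.write_text("// stub")

kept = [str(p.relative_to(root)) for p in sorted(root.rglob("*"))
        if p.is_file() and keep(p, root, [".js", ".jsx"])]
print(kept)  # ['src/app.js']
```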
Converting Files with Claude
Each file gets sent as a standalone conversion request. Claude Sonnet 4 is a solid pick here — fast enough for batch processing and accurate enough for code transforms. For large files with complex logic, you can swap in Claude Opus 4 via the model parameter:
def convert_file(source_path, src_dir, migration_type,
                 model="claude-sonnet-4-20250514"):
    rel = source_path.relative_to(src_dir)
    source_code = source_path.read_text(encoding="utf-8")
    if not source_code.strip():
        return {"file": str(rel), "status": "skipped", "reason": "empty"}

    prompt = build_prompt(migration_type, source_code, str(rel))
    try:
        response = client.messages.create(
            model=model,
            max_tokens=4096,  # raise this for very large files
            messages=[{"role": "user", "content": prompt}],
        )
        converted = response.content[0].text

        # Strip markdown fences if present
        if converted.startswith("```"):
            lines = converted.split("\n")
            # Drop the opening fence, and the closing fence only if it exists,
            # so a truncated response doesn't lose its last line of code
            if lines[-1].strip() == "```":
                lines = lines[1:-1]
            else:
                lines = lines[1:]
            converted = "\n".join(lines)

        # Compute new extension
        new_ext = _target_extension(source_path.suffix, migration_type)
        out_name = source_path.stem + ".migrated" + new_ext
        out_path = source_path.parent / out_name
        out_path.write_text(converted, encoding="utf-8")
        return {
            "file": str(rel),
            "status": "converted",
            "output": str(out_path.relative_to(src_dir)),
            "tokens_in": response.usage.input_tokens,
            "tokens_out": response.usage.output_tokens,
        }
    except Exception as e:
        return {"file": str(rel), "status": "error", "reason": str(e)}

def _target_extension(current_ext, migration_type):
    ext_map = {
        "js-to-ts": {".js": ".ts", ".jsx": ".tsx"},
        "express-to-hono": {".js": ".ts"},
    }
    return ext_map.get(migration_type, {}).get(current_ext, current_ext)
Notice the .migrated suffix — converted files land next to originals so you can diff them. No originals get overwritten. Run the migration, review the diffs, then swap the files in once you're satisfied.
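Once you're satisfied, the swap itself can be scripted too. A minimal sketch of a helper (my own addition, not part of migrate.py) that backs up each original as .bak and promotes the .migrated file to the original name, accounting for an extension change like .js to .ts:

```python
import shutil
from pathlib import Path

def swap_in(src_dir, source_ext, target_ext):
    """Promote reviewed name.migrated<target_ext> files over name<source_ext>.

    Originals are kept as .bak so the swap is reversible.
    """
    src = Path(src_dir)
    swapped = []
    for migrated in sorted(src.rglob(f"*.migrated{target_ext}")):
        stem = migrated.name[: -len(".migrated" + target_ext)]
        original = migrated.parent / (stem + source_ext)
        if original.exists():
            # Keep a backup of the pre-migration file
            shutil.move(str(original), str(original) + ".bak")
        final = migrated.parent / (stem + target_ext)
        shutil.move(str(migrated), str(final))
        swapped.append(final)
    return swapped
```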
CLI Entry Point
Wrap it in a clean CLI with argparse. The tool takes a source directory, a migration type, and an optional model override:
if __name__ == "__main__":
    import argparse
    parser = argparse.ArgumentParser(
        description="AI-powered code migration tool"
    )
    parser.add_argument("src", help="Source directory")
    parser.add_argument("--type", required=True,
                        choices=["js-to-ts", "flask-to-fastapi",
                                 "express-to-hono", "react-class-to-hooks"])
    parser.add_argument("--model",
                        default="claude-sonnet-4-20250514")
    parser.add_argument("--dry-run", action="store_true",
                        help="List files without converting")
    args = parser.parse_args()

    src_dir = Path(args.src).resolve()
    if not src_dir.is_dir():
        console.print(f"[red]Error:[/] {src_dir} is not a directory")
        sys.exit(1)

    files = discover_files(src_dir, args.type)
    console.print(
        f"[bold]Found {len(files)} files[/] for {args.type} migration"
    )
    if args.dry_run:
        for f in files:
            console.print(f"  {f.relative_to(src_dir)}")
        sys.exit(0)

    results = []
    total_in, total_out = 0, 0
    t0 = time.time()
    with Progress(SpinnerColumn(), TextColumn("{task.description}"),
                  console=console) as progress:
        task = progress.add_task("Migrating...", total=len(files))
        for f in files:
            rel = f.relative_to(src_dir)
            progress.update(task, description=f"Converting {rel}")
            result = convert_file(f, src_dir, args.type, args.model)
            results.append(result)
            total_in += result.get("tokens_in", 0)
            total_out += result.get("tokens_out", 0)
            progress.advance(task)
    elapsed = time.time() - t0

    # Print summary
    converted = [r for r in results if r["status"] == "converted"]
    errors = [r for r in results if r["status"] == "error"]
    console.print(f"\n[bold green]✓ Migration complete[/]")
    console.print(f"  Converted: {len(converted)} files")
    console.print(f"  Errors: {len(errors)} files")
    console.print(f"  Tokens: {total_in:,} in / {total_out:,} out")
    console.print(f"  Time: {elapsed:.1f}s")

    # Write report
    report_path = src_dir / "migration-report.json"
    report_path.write_text(json.dumps(results, indent=2))
    console.print(f"  Report: {report_path}")
Running it looks like this:
# Dry run first — see what'll be processed
python migrate.py ./my-flask-app --type flask-to-fastapi --dry-run
# Run the actual migration
export ANTHROPIC_API_KEY="sk-..."
python migrate.py ./my-flask-app --type flask-to-fastapi
# Use a different model for complex codebases
python migrate.py ./my-express-app --type express-to-hono --model claude-opus-4-20250514
Handling Edge Cases
Three things will trip you up in production migrations:
1. Import resolution. Claude converts individual files, but it doesn't see your whole project graph. If utils.py exports a Flask-specific helper, the converted files will still import it. Fix: add a second pass that scans converted files for stale imports and flags them in the report.
2. Large files. Files over ~8K tokens of source should be chunked or sent to a model with a larger context window. The 100KB filter in discover_files catches the worst offenders, but dense Python can hit token limits at 50KB. Monitor the tokens_in count in your report.
3. Test files. Test suites often use framework-specific fixtures and mocks that need manual review. Consider excluding *_test.py and test_*.py from automated migration and handling them separately.
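The second pass suggested in point 1 can be a plain text scan over the converted files. A sketch under a couple of assumptions (the Flask-specific patterns and the dict return shape are my choices; extend the pattern list for your stack):

```python
import re
from pathlib import Path

# Imports that should no longer appear after a flask-to-fastapi migration.
# Assumption: one pattern family (Flask); add more for your own helpers.
STALE_PATTERNS = [
    re.compile(r"^\s*(?:from|import)\s+flask\b.*$", re.M),
]

def find_stale_imports(src_dir, suffix=".migrated.py"):
    """Return {file: [offending import lines]} for converted files."""
    flagged = {}
    for path in sorted(Path(src_dir).rglob(f"*{suffix}")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        hits = [m.group(0).strip()
                for pat in STALE_PATTERNS
                for m in pat.finditer(text)]
        if hits:
            flagged[str(path)] = hits
    return flagged
```

Merge the result into migration-report.json and you get a review checklist for free.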
For teams running large migrations, check the concurrent requests guide to parallelize file conversions. A 200-file migration drops from 40 minutes to under 5 with 10 concurrent workers.
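Because convert_file handles one file at a time with no shared state, a thread pool is a near drop-in change. A sketch with concurrent.futures (the worker count and the convert callable here are stand-ins):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def convert_many(files, convert, max_workers=10):
    """Run convert(path) for each file across a thread pool.

    Results come back in completion order, not input order;
    each result dict carries its own "file" key, so order doesn't matter.
    """
    results = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(convert, f): f for f in files}
        for fut in as_completed(futures):
            results.append(fut.result())
    return results
```

Threads are fine here because the work is I/O-bound (waiting on API responses), so the GIL is not a bottleneck.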
Tracking Costs
The migration report includes token counts per file. To estimate cost before committing to a full run, combine --dry-run's file list with a rough token count (pip install tiktoken; its cl100k_base encoding is an OpenAI tokenizer, so counts are only approximate for Claude, but close enough for budgeting):
# Quick cost estimate before migrating
import tiktoken

def estimate_cost(files, price_per_1k_in=0.003, price_per_1k_out=0.015):
    enc = tiktoken.get_encoding("cl100k_base")
    total_tokens = 0
    for f in files:
        text = f.read_text(encoding="utf-8", errors="ignore")
        total_tokens += len(enc.encode(text))
    # Assume output ≈ 1.2x input for code migration
    cost_in = (total_tokens / 1000) * price_per_1k_in
    cost_out = (total_tokens * 1.2 / 1000) * price_per_1k_out
    return {"tokens": total_tokens, "est_cost": cost_in + cost_out}
For a typical 50-file Flask app (~2K tokens per file), you're looking at roughly 100K input tokens and 120K output tokens. On EzAI with Sonnet pricing, that's around $2 for the entire migration. Compare that to a developer spending a week on manual conversion and it's not even close.
Track your actual spend on the EzAI dashboard — every API call shows up in real time with per-model breakdowns.
What you've built: a working CLI tool that converts codebases between languages and frameworks using Claude through EzAI API. The tool discovers files, respects gitignore patterns, converts each file individually, writes output alongside originals, and generates a JSON migration report with token costs. Extend it with concurrent processing, custom migration prompts, or a second-pass import resolver to handle larger projects.