TL;DR
- Your safest path is to become the person who ships more, with higher quality, by using AI well.
- Audit your work, automate the repeatable parts, double down on skills that compound: systems, product sense, data, and collaboration.
- Measure your uplift and make it visible. Fear recedes when you can point to outcomes.
Why this post
Anxiety is normal in a platform shift. The goal here is not hype, but a practical plan to reduce risk and increase your value as a developer. You will leave with concrete steps you can start this week.
Quick reality check: what AI is and is not good at (as of 2024)
AI is strong at:
- Rewriting, refactoring, and boilerplate
- Summarizing and transforming text, code, configs
- Suggesting test cases, edge cases, and scaffolding
- Generating first drafts for docs, tickets, SQL, and scripts
AI is weak at:
- Ambiguous, under-specified product work and tradeoffs
- Integrations with messy real systems and flaky environments
- Accountability, ownership, and coordination across teams
- Non-obvious debugging that requires deep system context
The career move: use AI to compress the easy 60% so you can spend more time on the valuable 40%.
A simple framework: Audit → Adapt → Amplify
- Audit your tasks
  - For two weeks, track how you spend time. Label each block as one of:
    - Repeatable: clear input and output; an example exists; low ambiguity
    - Judgment: stakeholder negotiation, tradeoffs, product shaping
    - Glue: coordination, reviews, runbooks, on-call, incident response
  - Goal: find 2–3 repeatables per week to delegate to an AI-assisted flow.
- Adapt your workflow
  - Create lightweight AI Standard Operating Procedures (SOPs) for the repeatables.
  - Add guardrails: review checklists, tests, and small evaluation scripts.
- Amplify and measure
  - Capture before/after metrics: time to first commit, PR cycle time, defect escapes, on-call MTTR.
  - Socialize wins in weekly notes or a short demo; make your leverage visible.
Step 1: a 2-week task audit (fast setup)
Create a simple log. Do not over-engineer it.
```
# task_log.md
2024-04-10
- 09:30–10:15: Fix flaky test in payments e2e (Glue)
- 10:30–11:10: Add pagination to orders API (Repeatable)
- 11:20–12:00: Spec review with PM on refund edge cases (Judgment)
...
```
At the end of each day, mark candidates for AI help. Look for patterns like:
- Transformations: JSON ↔ YAML, schema updates, code migrations
- Boilerplate: DTOs, mappers, validation, basic tests
- Docs and tickets: summaries, acceptance criteria, changelogs
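To see where the time actually goes, the tagged log can be tallied with a few lines of Python. This is a minimal sketch that assumes the exact line format shown above (en-dash or hyphen between times, tag in parentheses):

```python
import re
from collections import defaultdict

# Matches log lines like "- 09:30–10:15: Fix flaky test (Glue)".
# Assumes the simple format above; adjust the regex if yours differs.
LINE = re.compile(r"- (\d{2}):(\d{2})[–-](\d{2}):(\d{2}):.*\((\w+)\)")

def tally(log_text):
    """Return minutes spent per tag (Repeatable / Judgment / Glue)."""
    totals = defaultdict(int)
    for line in log_text.splitlines():
        m = LINE.search(line)
        if m:
            h1, m1, h2, m2, tag = m.groups()
            totals[tag] += (int(h2) * 60 + int(m2)) - (int(h1) * 60 + int(m1))
    return dict(totals)

sample = """
- 09:30–10:15: Fix flaky test in payments e2e (Glue)
- 10:30–11:10: Add pagination to orders API (Repeatable)
"""
print(tally(sample))  # {'Glue': 45, 'Repeatable': 40}
```

Run it at the end of each week; if Repeatable minutes dominate, you have found your delegation candidates.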
Step 2: convert repeatables into AI-assisted SOPs
Start with two SOPs this week. Keep them short.
SOP: AI pair programming checklist
- Write a 1–2 sentence task intent and constraints (language, framework, style guides).
- Ask for a plan first; then request code.
- Always request tests and a quick rationale.
- Run locally, then ask AI to explain any failing test.
- Add a short docstring or comment explaining tradeoffs before committing.
Prompt skeleton you can reuse:
```
You are my coding pair. Task: <one sentence>. Context: <repo, stack, constraints>.
Deliver: minimal diff + tests + brief rationale (bullets). Follow project conventions: <lint/test/build>.
Before code: propose a short plan and risks. Then provide the diff.
```
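If you find yourself retyping the skeleton, a tiny helper keeps every session consistent. The function and field names here are illustrative, not a library API:

```python
# Hypothetical helper that fills in the prompt skeleton so every
# pairing session starts with the same structure.
TEMPLATE = (
    "You are my coding pair. Task: {task}. Context: {context}.\n"
    "Deliver: minimal diff + tests + brief rationale (bullets). "
    "Follow project conventions: {conventions}.\n"
    "Before code: propose a short plan and risks. Then provide the diff."
)

def build_prompt(task, context, conventions):
    return TEMPLATE.format(task=task, context=context, conventions=conventions)

print(build_prompt(
    task="Add pagination to the orders API",
    context="Python/FastAPI monorepo, orders service",
    conventions="ruff + pytest + mypy",
))
```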
SOP: daily doc accelerator
- Paste standup notes; ask AI for a concise summary with blockers and next steps.
- Generate a draft PR description from the diff; you finalize the risks section.
Guardrail: fast review protocol
- Never merge AI output without:
  - Tests passing locally
  - One manual path test in a real environment
  - Static checks: lint, type check, basic security scan
  - A quick self-review: what did I not verify?
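The automated half of this protocol can be scripted so it is never skipped. A sketch; the commands below are placeholders for your project's real test, lint, and scan invocations:

```python
import subprocess

# Pre-merge gate sketch: run the automated checks in order, stop at the
# first failure. Commands are placeholders -- swap in your project's
# real test, lint, type-check, and security-scan invocations.
def run_gate(checks):
    """checks: list of (name, command) pairs. True if all commands exit 0."""
    for name, cmd in checks:
        if subprocess.run(cmd).returncode != 0:
            print(f"FAILED: {name} -- do not merge")
            return False
    print("Automated checks passed; do the manual path test next.")
    return True

# Example wiring (placeholder commands, not run here):
example_checks = [
    ("tests", ["pytest", "-q"]),
    ("lint", ["ruff", "check", "."]),
    ("types", ["mypy", "."]),
]
```

The manual path test and self-review stay human; the script only removes the excuse for skipping the mechanical parts.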
Step 3: measure uplift with lightweight metrics
Pick two of the following metrics and track them weekly:
- TTFC: time to first commit for a new task
- PR cycle time: open to merge
- Defect escape rate: bugs reported post-merge in first 7 days
- Doc latency: time from code done to docs updated
Simple habit: include a one-line outcome in your weekly note:
- Example: Reduced PR cycle from 2.1d to 1.3d by templating test scaffolds with AI.
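Cycle time is easy to compute once you export opened/merged timestamps from your Git host. A minimal sketch, assuming ISO-8601 timestamp pairs gathered by hand or via your host's API:

```python
from datetime import datetime

# Minimal weekly-metric sketch, assuming you export (opened, merged)
# ISO-8601 timestamp pairs for each PR merged this week.
def pr_cycle_days(prs):
    """Average open-to-merge time in days."""
    seconds = [
        (datetime.fromisoformat(merged) - datetime.fromisoformat(opened)).total_seconds()
        for opened, merged in prs
    ]
    return sum(seconds) / len(seconds) / 86400

week = [
    ("2024-04-08T09:00:00", "2024-04-09T15:00:00"),  # 1.25 days
    ("2024-04-10T10:00:00", "2024-04-11T19:36:00"),  # 1.40 days
]
print(f"PR cycle this week: {pr_cycle_days(week):.2f}d")
```

Paste the one-line result into your weekly note; the trend matters more than any single week.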
A 30–60–90 day plan
First 30 days
- Baseline your metrics and complete the 2-week audit.
- Ship 2 SOPs: code scaffolding and docs.
- Share a short internal post: what worked, pitfalls, before/after.
Days 31–60
- Own an AI-augmented initiative: migrations, test coverage, config cleanup, docs revamp.
- Add evaluation to one flow: golden tests or seed prompts to check outputs.
- Pair with a teammate to spread the playbook.
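A golden test is just a pinned known-good output that fails when a regenerated draft drifts. A sketch, with a deterministic stand-in for the AI step (`generate_pr_summary` is hypothetical):

```python
# Golden-test sketch for an AI-assisted flow: pin a known-good output
# and fail loudly when a regenerated draft drifts. generate_pr_summary
# is a deterministic stand-in for whatever your flow actually produces.
def generate_pr_summary(diff_stats):
    return f"{diff_stats['files']} files changed, {diff_stats['tests']} tests added"

GOLDEN = "3 files changed, 2 tests added"

def test_pr_summary_matches_golden():
    out = generate_pr_summary({"files": 3, "tests": 2})
    assert out == GOLDEN, f"drifted from golden: {out!r}"

test_pr_summary_matches_golden()
print("golden test passed")
```

For genuinely non-deterministic outputs, compare against the golden with a looser check (key phrases present, length bounds) rather than exact equality.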
Days 61–90
- Ship one user-facing improvement accelerated by AI (faster iteration cycles, better tests).
- Document your AI usage policy and guardrails for the team.
- Present results with metrics; ask to mentor two engineers.
Skills that age well (your anti-automation moat)
Technical
- Systems and debugging: tracing across services, perf profiles, memory leaks
- Data modeling and interfaces: schemas, contracts, versioning, migrations
- Testing and reliability: property tests, load tests, observability, on-call craft
- Security and privacy: least privilege, secrets, PII handling, threat modeling
- AI integration literacy: retrieval, context windows, prompt patterns, evals, cost control
Product and collaboration
- Product sense: scoping, slicing, and making tradeoffs that fit constraints
- Communication: crisp proposals, visuals, risk callouts, stakeholder updates
- Domain knowledge: how value is created in your specific business
Build visible career capital
- Keep an impact log: date, problem, action, measurable outcome.
- Maintain a running playbook: your SOPs, prompts, checklists, and pitfalls.
- Contribute small internal tools or scripts that save team hours; announce them.
- Do one brown-bag talk per quarter on lessons learned.
Calm tactics when anxiety spikes
- Control the controllables: your learning plan, your shipping cadence, your portfolio.
- Tighten your news diet: check summaries weekly, not hourly.
- Convert fear into experiments: one small experiment per week beats doomscrolling.
- Talk to your manager: propose an AI uplift goal tied to team metrics.
- Maintain runway: 3–6 months of expenses, updated resume, and a current portfolio.
If layoffs happen: a fast rebound playbook
Week 1
- Ship a focused portfolio update: 3 projects with outcomes and numbers.
- Prepare a one-paragraph story on how you use AI to deliver faster and safer.
- Set a daily routine: 2 applications, 1 networking reach-out, 1 skill rep.
Weeks 2–4
- Build a small public artifact weekly: a gist, CLI, plugin, or demo.
- Target roles where your domain knowledge is valued and AI literacy is a plus, not a replacement.
Interview lens
- Emphasize ownership, debugging depth, and collaboration under ambiguity.
- Have one story where AI saved days; another where you caught an AI-induced bug.
One-page summary you can print
- Audit 2 weeks of tasks; tag Repeatable, Judgment, Glue
- Create 2 AI SOPs; add guardrails and fast evals
- Track 2 metrics; share a weekly one-line outcome
- Invest weekly in systems, data, reliability, security, and AI literacy
- Make impact visible: logs, demos, docs, internal talks
- Reduce noise; run small experiments; maintain financial and portfolio runway
You do not need to out-code a model. You need to out-ship most humans by pairing with one, safely. That is both realistic and within your control today.