% cat /etc/stack.conf
I get asked about my technology choices often enough that it made sense to put them in one place. I’m sharing this for two reasons: first, in the hope that someone building a similar system finds it useful — even if just as a starting point to disagree with. Second, if you’re evaluating whether my skills and experience match your requirements — whether you’re looking for hands-on assistance or training — this is the honest inventory.
Development Tools
Every tool choice I make comes down to one question: does it let me ship faster without sacrificing correctness?
My primary IDE is Cursor, augmented with carefully developed rule sets — coding style, naming conventions, architectural patterns, commit hygiene. These rules aren’t decoration; they’re what turns an AI coding assistant from a creative autocomplete into a disciplined team member. For legacy codebases that predate this workflow, my team and I use IntelliJ IDEA with the Claude Code plugin, which gets us about 80% of the way there.
For tasks where code is disposable — prototypes, one-off scripts, static sites — Claude Code and Claude Cowork are the fastest path from idea to deployment. This website, including the AI Twin you can chat with right now, was built entirely in Claude Cowork. So was the website for Rishon. No code reviews needed when the entire artifact ships in one session.
Languages
TypeScript is my default for everything on the backend. Not because it’s trendy — because static type checks catch at compile time what would otherwise blow up at 3 AM in production. The type system is expressive enough to model complex domain logic, and the NPM ecosystem means I rarely build from scratch what someone else has already battle-tested. I’ve developed a specialized methodology for AI-assisted development in TypeScript that leverages the type system as a contract between the human architect and the AI coder — the types become the guardrails.
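To make the "types as guardrails" idea concrete, here is a minimal sketch (all names are illustrative, not taken from a real codebase) of a branded-ID pattern that turns a mixed-up argument into a compile-time error instead of a runtime bug:

```typescript
// Illustrative only: branded types make two IDs that are both plain
// strings at runtime incompatible at compile time.
type UserId = string & { readonly __brand: "UserId" };
type OrderId = string & { readonly __brand: "OrderId" };

const asUserId = (raw: string): UserId => raw as UserId;

// The signature is the contract: this only accepts a UserId, so
// passing a raw string or an OrderId is rejected by the compiler.
function loadOrders(user: UserId): OrderId[] {
  // Stubbed lookup; a real implementation would hit a data store.
  return [`order-for-${user}`] as OrderId[];
}

const orders = loadOrders(asUserId("u-42"));
console.log(orders.length); // 1
```

An AI assistant working inside this codebase cannot pass an OrderId where a UserId is expected without the build failing, which is exactly the kind of contract the types are meant to enforce.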
For user interfaces: React when the product is purely web-based, Flutter/Dart when native mobile or desktop apps are required or on the roadmap. Flutter’s single-codebase-to-all-platforms promise actually delivers in practice, which is more than I can say for most cross-platform frameworks I’ve tried.
For embedded systems and performance-critical components: C++. I have my eye on Rust for future projects in this category — the ownership model is compelling, and the safety guarantees align with where I think systems programming needs to go. That said, I don’t start Rust projects yet: using it effectively requires a well-defined pipeline for AI-assisted development, and I haven’t fleshed mine out. I refuse to write in a language without AI assistance in 2026 — I have principles.
Java remains in the rotation for legacy projects and systems where the JVM ecosystem is already deeply entrenched. It’s a workhorse — not exciting, but reliable.
And yes, funnily enough, I still remember COBOL, PL/I, dBase, and even a variety of assembly languages. I wouldn’t seriously consider any of them for a new project, but they do make for entertaining war stories.
LLMs — Picking the Right Model for the Job
I don’t believe in a single “best” LLM. Different models dominate different tasks, and knowing which to reach for is half the battle.
Claude Opus 4.6 is my go-to for coding. It handles complex, multi-file refactors across large codebases better than anything else I’ve used. With a 1M token context window, it can hold an entire component of a well-componentized project, plus its dependencies, in working memory — enough to reason about interfaces, side effects, and downstream impact without losing the thread. Its ability to detect and self-correct mistakes during code review is genuinely useful — not a gimmick. Anthropic’s API also supports sandboxed code execution, which means the model can run and validate its own output before handing it back to me. When I run it through Cursor or Claude Code with well-crafted rules, the output quality is consistently high enough that I trust it with production code.
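As a rough illustration of what enabling sandboxed code execution in a Messages API request might look like: this only builds the request body, and both the model identifier and the tool version string are assumptions here, so check Anthropic's current API reference before relying on them.

```typescript
// Hedged sketch: assembles a request body for Anthropic's Messages
// API with the code-execution tool enabled. Version strings and the
// model name are assumptions, not verified against current docs.
interface CodeExecRequest {
  model: string;
  max_tokens: number;
  tools: { type: string; name: string }[];
  messages: { role: "user" | "assistant"; content: string }[];
}

function buildCodeExecRequest(prompt: string): CodeExecRequest {
  return {
    model: "claude-opus-4-6", // assumed identifier
    max_tokens: 4096,
    tools: [{ type: "code_execution_20250522", name: "code_execution" }],
    messages: [{ role: "user", content: prompt }],
  };
}

// The body would be POSTed to https://api.anthropic.com/v1/messages
// with x-api-key and anthropic-version headers.
```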
GPT-5.4 wins for file processing and quantitative work — spreadsheets, financial models, document transformations. It handles multi-step workflows across complex file types with a reliability that the other models haven’t matched yet.
Gemini 3.1 Pro earns its place with sheer context capacity: a 2M token window, double what the others offer. When I need to synthesize insights across a large corpus of documents, analyze lengthy video recordings, or process massive datasets where losing context mid-way would be fatal, Gemini is the tool I reach for.
Runtimes & Cloud
For local and on-premises deployments: Node.js. Portable, reliable, and the runtime I know best. The event loop model handles I/O-heavy workloads efficiently without the thread-management overhead of traditional servers.
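A tiny illustration of why the event loop suits I/O-heavy workloads: three simulated 100 ms I/O operations finish in roughly 100 ms total, because the waits overlap on a single thread instead of each occupying its own.

```typescript
// Simulate an I/O call (network, disk, database) with a timer.
const fakeIo = (id: number): Promise<number> =>
  new Promise((resolve) => setTimeout(() => resolve(id), 100));

async function main(): Promise<void> {
  const start = Date.now();
  // All three "requests" are in flight at once; the event loop
  // parks each one while its timer runs.
  const results = await Promise.all([fakeIo(1), fakeIo(2), fakeIo(3)]);
  const elapsed = Date.now() - start;
  // Order is preserved by Promise.all; elapsed is ~100 ms, not 300.
  console.log(results, `${elapsed}ms`);
}

main();
```

A thread-per-request server would have tied up three threads for the same result; here one thread interleaves all three waits.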
In the cloud: AWS, preferably serverless. The serverless model eliminates an entire category of operational concerns — no patching, no capacity planning, no paying for idle compute. You write the logic; AWS handles the rest.
My preferred approach is infrastructure-as-code — not just configuration files, but actual code that can execute logic, make decisions, and adapt as it runs. The payoff goes beyond reproducibility: when a deployment fails, I troubleshoot it with the same tools and methods I use for the application itself. No context-switching between “app debugging” and “infra debugging” — it’s all code, all the way down.
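The post doesn't name a specific tool, but AWS CDK in TypeScript is one way to get infrastructure that is genuinely code rather than configuration. A hedged sketch of the idea, assuming aws-cdk-lib is installed; names and values are illustrative:

```typescript
import { App, Stack, StackProps, Duration } from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";

// Sketch, not a production stack: the infrastructure definition is
// ordinary TypeScript, so conditionals and functions can drive
// deployment decisions.
class ApiStack extends Stack {
  constructor(scope: App, id: string, props?: StackProps) {
    super(scope, id, props);
    const isProd = this.node.tryGetContext("env") === "prod";

    new lambda.Function(this, "Handler", {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: "index.handler",
      code: lambda.Code.fromInline(
        "exports.handler = async () => ({ statusCode: 200 });"
      ),
      // Infra logic as code: prod gets more memory and a longer timeout.
      memorySize: isProd ? 1024 : 256,
      timeout: Duration.seconds(isProd ? 30 : 10),
    });
  }
}

new ApiStack(new App(), "ApiStack");
```

Because the stack is plain code, a failed deployment can be debugged with the same debugger, logging, and tests as the application itself.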
The core building blocks: Lambda as the execution engine (every API call, every background job, every event handler) — running Node.js or Java runtimes depending on the project. DynamoDB as the database of choice — unless you genuinely need full SQL with ACID transactions, and you’d be surprised how many systems don’t. S3 for blob storage. CloudFront as CDN. API Gateway for managing APIs in front of Lambdas. SQS for message queues (with Kinesis stepping in when the workload is telemetry or high-throughput event streams). Route 53 for DNS. And a handful of others for specialized purposes.
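A stripped-down sketch of the Lambda-behind-API-Gateway shape described above. The event and response interfaces here are simplified stand-ins for the real aws-lambda types, and the data access is stubbed out where a DynamoDB call would go:

```typescript
// Simplified shapes; real code would use @types/aws-lambda.
interface ApiEvent {
  pathParameters?: { id?: string };
}
interface ApiResponse {
  statusCode: number;
  body: string;
}

// One Lambda per route: API Gateway maps the HTTP request to this
// handler, which validates input and returns an HTTP-shaped result.
async function handler(event: ApiEvent): Promise<ApiResponse> {
  const id = event.pathParameters?.id;
  if (!id) {
    return { statusCode: 400, body: JSON.stringify({ error: "missing id" }) };
  }
  // In the real system this would be a DynamoDB read keyed on id.
  return { statusCode: 200, body: JSON.stringify({ id, status: "ok" }) };
}
```

Because the handler is just an async function, it can be unit-tested locally by invoking it with a mock event, with no AWS account in the loop.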
I have nothing against other clouds. My deepest experience is with AWS, and the serverless ecosystem there is mature enough that switching would need a compelling reason.
Observability & What’s Next
If something breaks in production and no log explains why, it might as well not have happened — except it did, and now you’re guessing. I use CloudWatch for logs, metrics, and event-driven alerting, and Sentry for error tracking and tracing. Sentry’s strength is that it unifies error reporting and distributed tracing across the entire software stack — every client platform, every server component, every Lambda invocation — into a single timeline. The combination gives me both the high-level health dashboard and the ability to drill into a specific request that went sideways.
More recently, I’ve been combining observability with AI-assisted troubleshooting. My workflow: error reports, logs, trace data, and metrics get pulled into Cursor, where the AI agent researches the issue across the codebase, correlates the evidence, and proposes fixes — subject to my approval. In some cases I configure Cursor to pull this data automatically when a pattern triggers, so by the time I look at the problem, a candidate fix is already waiting for review. It’s not a replacement for understanding your system, but it’s a force multiplier when you’re staring at a 2 AM incident.
Looking ahead: I’m working on incorporating formal proofing mechanisms into AI-assisted development and debugging. The idea is to move beyond “the tests pass” toward mathematically verifiable correctness guarantees for critical code paths — especially the ones where AI generated the code in the first place. If we’re going to trust AI to write production systems, we need verification tools that match that trust with rigor.