Wednesday, June 11, 2025

Human Ingenuity in the Age of Generative AI

Beyond the Prompt: Why Character Outruns Any Model


Illustration: AI brain supported by five pillars—Curiosity, Passion, Grit, Thoroughness, Justice

Picture this: an engineer asks a GenAI assistant to scaffold a data-processing service. The model returns flawless, unit-tested code in minutes—but it silently violates a critical privacy constraint, creating a compliance time-bomb. The tool worked perfectly; judgment failed.

While the industry chases “10× productivity,” code generation is becoming a commodity. Advantage shifts to whoever frames the problem, tests the idea, and guards quality. That’s both a hiring question and an operations question.

As engineering leaders, our mandate is no longer just to build faster—it’s to build wiser.

Over dozens of projects and talent reviews, I’ve distilled five human pillars of engineering excellence. AI can amplify them, but never replace them.

The Five Pillars of the AI-Era Engineer

1. Expansive Curiosity: The "What If" Engine

Curiosity is the trait that refuses to accept the current problem statement as the final word. It's the relentless drive to ask "what if?" until the old roadmap looks small. While an AI can optimize a known path, a curious engineer discovers entirely new destinations. This is the engine of true innovation, not just iteration.

In Practice, This Looks Like:
  • Prototyping a solution for a customer problem that isn't even on the backlog yet.
  • Questioning a long-held architectural assumption that "everyone knows" is true.
  • Spending 10% of their time learning a completely unrelated technology to see what concepts might apply back to their core work.
  • Pushing the model with exploratory prompts until it surprises you.

2. Applied Thoroughness: The Scrutiny Shield

If Curiosity expands, Thoroughness protects. This is the professional discipline to pressure-test every artifact, especially those generated by AI. Speed without scrutiny is vanity; it builds technical debt and exposes the business to risk. The thorough engineer is a professional skeptic who trusts, but relentlessly verifies. In one talent review, for example, we identified a junior developer who had been using AI tools to answer deeply technical questions: he was visibly surprised on hearing a term from his own answers.

In Practice, This Looks Like:
  • Writing tests specifically designed to break the "happy path" logic an AI tends to produce.
  • Rejecting an AI-suggested library after investigating its dependency tree and long-term supportability.
  • Manually walking through generated code to validate its logic against business rules, not just its syntax.
  • Pairing LLM output with static-analysis and threat-model checkpoints.
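The first bullet can be made concrete. Below is a TypeScript sketch: `parseCsvLine` stands in for the kind of happy-path-only helper an AI assistant often produces (it is hypothetical, not from a real codebase), and the checks deliberately probe the cases such code tends to miss.

```typescript
// Hypothetical AI-generated helper: splits a CSV line on commas.
// It handles the happy path but ignores quoted fields and empty input.
function parseCsvLine(line: string): string[] {
  return line.split(",").map((s) => s.trim());
}

// Tests aimed squarely at the cases naive generated code tends to miss.
function testBeyondHappyPath(): string[] {
  const failures: string[] = [];

  // Happy path: passes, which is exactly why it breeds false confidence.
  if (parseCsvLine("a,b,c").join("|") !== "a|b|c") failures.push("simple split");

  // Quoted field containing a comma: a naive split breaks it into two cells.
  if (parseCsvLine('"hello, world",x').length !== 2) failures.push("quoted comma");

  // Empty line: should arguably yield no cells, not one empty cell.
  if (parseCsvLine("").length !== 0) failures.push("empty line");

  return failures; // a non-empty list means the generated code needs rework
}
```

Run against the helper above, the happy-path check passes while the quoted-comma and empty-line checks fail, which is precisely the gap a thorough reviewer is paid to find.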

3. Enduring Grit: The Breakthrough Engine

AI makes starting things easy. Grit is what finishes them. It's the resilience to turn a dozen cheap, failed iterations into one durable breakthrough. Grit is what sustains an engineer through the messy, unglamorous work of refactoring, integration, and debugging a complex system—the parts of the job that AI often makes worse, not better.

In Practice, This Looks Like:
  • Spending three days tracking down an intermittent bug in a legacy system to ensure a new AI-driven feature is reliable.
  • Championing a necessary platform-wide refactor even when it’s not a "sexy" new feature.
  • Methodically instrumenting and monitoring a new system, refusing to call it "done" until it's proven stable in production.
  • Finishing personal side projects despite a calendar full of stand-ups and evenings full of kids’ classes.

4. Purpose-Driven Passion: The "Why" Compass

When the tool of the day pivots tomorrow, passion for the underlying purpose is what keeps a great engineer oriented. This is the connection to the "why" behind the work—the desire to solve the customer's problem or advance the company's mission. Passion provides the context AI lacks, ensuring the right problems are solved with empathy.

In Practice, This Looks Like:
  • An engineer on the "billing" team spending a day with the finance department to understand their workflow firsthand.
  • Choosing to build a simpler feature that perfectly solves a user's core need over a more complex one that's technically interesting.
  • Articulately explaining business trade-offs to stakeholders, demonstrating they own the outcome, not just the code.
  • Linking sprint goals to company impact in every demo.

5. Applied Justice: The Conscience in the Machine

An AI is a mirror for the data it's trained on, biases and all. Justice is the active, engineering-led commitment to building fairness into our systems. It moves ethics from a theoretical discussion to a practical discipline. It's about asking "Whom might this harm?" and building safeguards so that as our impact scales, our biases do not. In one of our use cases, for instance, an NLP dashboard cut analytics time by 80%, but only after we patched an auth hole it had shipped with.

In Practice, This Looks Like:
  • Building fairness checks directly into the CI/CD pipeline to flag biased model outputs automatically.
  • Designing systems to use the absolute minimum amount of personally identifiable information (PII) by default.
  • Creating "red team" scenarios to brainstorm and mitigate how a feature could be abused by bad actors.
  • Publishing model-risk statements with an executive owner of record.
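To make the first bullet concrete, here is a minimal sketch of a demographic-parity gate that a CI step could run over a batch of model outputs. The `Prediction` shape, the 0.1 threshold, and the function names are illustrative assumptions, not any specific fairness library's API.

```typescript
// Sketch of a CI fairness gate: flag a model whose positive-outcome rate
// differs too much across groups (a simple demographic-parity check).
interface Prediction {
  group: string;    // e.g. a protected-attribute bucket
  approved: boolean; // the positive outcome being measured
}

// Largest difference in approval rate between any two groups.
function parityGap(preds: Prediction[]): number {
  const totals = new Map<string, { pos: number; n: number }>();
  for (const p of preds) {
    const t = totals.get(p.group) ?? { pos: 0, n: 0 };
    t.n += 1;
    if (p.approved) t.pos += 1;
    totals.set(p.group, t);
  }
  const rates = [...totals.values()].map((t) => t.pos / t.n);
  return Math.max(...rates) - Math.min(...rates);
}

// A CI step would fail the build when the gap exceeds a policy threshold.
function fairnessGate(preds: Prediction[], maxGap = 0.1): boolean {
  return parityGap(preds) <= maxGap;
}
```

A pipeline would call `fairnessGate` on a held-out evaluation set and fail the build when it returns false; a real check would add confidence intervals and more than one fairness metric.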

The Playbook: Building an Org on These Pillars

Identifying these pillars is easy. Building an organization that systematically cultivates them is hard. It requires tangible changes to our operating system.

  1. Hire for Traits, Not Ticks: Design interviews to surface curiosity, grit, and fairness. Perfect answers without reasoning trails are red flags.
  2. Upgrade the Career Ladder: Define what “Demonstrates Applied Thoroughness” or “Exhibits Expansive Curiosity” looks like at each level.
  3. Celebrate Process, Not Just Launches: Honor the best debugging story (Grit), the most insightful design-review question (Curiosity), or the smartest decision not to build (Thoroughness).

Conclusion: The Real Work Ahead

Obsessing over efficiency pits humans against machines in a race we can’t win. Our edge is the work AI can’t do: thinking, discerning, persevering, and stewarding with conscience. Build environments where these pillars aren’t just welcomed; they’re demanded.

Reflection prompt: Which pillar needs the most attention in your organization this quarter—and why? Drop your thoughts in the comments; I reply to every thoughtful note.

Further Reading & References

  • Fowler, Martin. (2023-2024). Articles on Generative AI. martinfowler.com.

    A series of essential, pragmatic articles on GenAI's role in software delivery. The site's discussions of the "jagged frontier" of LLM capabilities and the "Semantic Linter" pattern are critical for any leader developing a strategy for AI-assisted development, and they underscore the need for Applied Thoroughness.

  • Duckworth, Angela. (2016). Grit: The Power of Passion and Perseverance.

    The definitive book on the science of Grit, providing a deep evidence base for why this trait is so critical for long-term, high achievement in any complex field.

  • Kozyrkov, Cassie. (various). Articles on Decision Intelligence. Towards Data Science.

    As Google's Chief Decision Scientist, Kozyrkov provides crucial clarity on the difference between prediction (machine task) and decision-making (human task). Her work is foundational for understanding the importance of Purpose-Driven Passion in guiding technology.

  • Larson, Will. (2023). "AI is a new reasoning engine." Irrational Exuberance.

    A CTO's perspective on how LLMs are a new tool for thought, highlighting how engineers must use their own Curiosity to direct this powerful new capability effectively.

  • O'Neil, Cathy. (2016). Weapons of Math Destruction.

    The seminal, pre-GenAI book on algorithmic harm. Its lessons are more relevant than ever and provide the bedrock for understanding the need for Applied Justice.

  • Aristotle. (c. 340 BCE). Nicomachean Ethics.

    For those interested in the philosophical roots, this is the foundational text on human excellence that underpins the entire framework of this article.


About the Author

Ram is a Senior Solutions Architect at Sapient, responsible for building high-performing engineering teams that deliver resilient, ethical technology solutions in the AI era.

Monday, April 21, 2025

Rise of the AI‑Assisted Polyglot Developer



“The real power of AI‑powered coding isn’t in replacing developers, but in amplifying our ability to solve problems across languages and domains.”


In recent years, I’ve leaned on large language models (LLMs) to automate small scripting tasks—everything from refactoring Bash scripts to building Python data pipelines. These quick wins kept me wondering: what if I could harness these models to accelerate real full‑stack development? Today I lead engineering teams through complex digital transformations, yet my itch to build never went away. So when an old MNIT Jaipur classmate asked for help with his startup’s SQL reporting pains, I saw an opportunity to scratch that itch and explore how LLMs could turbo‑charge full‑stack development. What started as a simple SQL‑reporting utility blossomed into a deep dive into vibe coding, agentic workflows, and a renewed appreciation for human‑AI collaboration.

The Business Challenge

His operations lead needed ad‑hoc reports: log into a relational database, hand‑craft SQL, massage the output into CSVs. Simple—but manual, slow, and error‑prone. Could we embed a “smart” query assistant directly into his Vercel‑hosted dashboard, letting anyone ask natural‑language questions like:

“Show me total sales by region for Q1.”

…and instantly get a table, chart, or CSV?
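Conceptually, the pipeline is: question in, LLM-generated SQL, query execution, CSV out. The sketch below shows that shape in TypeScript; `generateSql` and `runQuery` are hypothetical stand-ins, injected as parameters so the flow is testable, with the real app wiring them to an LLM API and a PostgreSQL client.

```typescript
// Shape of the pipeline: natural-language question -> SQL -> rows -> CSV.
type SqlGenerator = (question: string) => Promise<string>;
type QueryRunner = (sql: string) => Promise<Record<string, unknown>[]>;

async function answerQuestion(
  question: string,
  generateSql: SqlGenerator, // stand-in for the LLM call
  runQuery: QueryRunner      // stand-in for the database client
): Promise<string> {
  const sql = await generateSql(question);

  // Guardrail: the assistant must only ever read, never mutate.
  if (!/^\s*select\b/i.test(sql)) throw new Error("Only SELECT queries allowed");

  const rows = await runQuery(sql);
  if (rows.length === 0) return "";

  // Flatten the result set into CSV for download or charting.
  const headers = Object.keys(rows[0]);
  const lines = rows.map((r) => headers.map((h) => String(r[h])).join(","));
  return [headers.join(","), ...lines].join("\n");
}
```

The SELECT-only guard is the bare minimum; a production version would also validate the generated SQL against the schema and run it under a read-only database role.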

Diagram: the LLM‑powered pipeline



Picking the Stack—Pain Points Included

With years of back‑end muscle memory, I initially sketched a Python backend for the LLM logic, with a Next.js front end. But Vercel’s platform pushes you toward a single runtime. After wrestling with mixed‑language builds, I pivoted: all‑in‑JavaScript/TypeScript on Node.js.

The learning curve was steep. I had to:

  1. Discover Vercel’s “v0 agentic mode” and its limitations (free‑tier quotas, usage warnings).

  2. Get up to speed on shadcn/ui and Tailwind CSS for rapid UI prototyping.

  3. Relearn Next.js conventions for server‑side API routes vs. edge functions.

By the end of Week 1, I had a skeletal “Table‑Stakes” project up on GitHub—and a burning question: How fast could I really go if I let an AI agent handle the plumbing?


Enter Vibe Coding

“Vibe coding” loosely describes a workflow where you direct an AI agent—via tools like Claude Sonnet 3.7 or ChatGPT—with short, intent‑based prompts, then iterate on its outputs in situ. It promised to:

  1. Bootstrap boilerplate instantly

  2. Generate utility functions on demand

  3. Suggest best‑practice snippets (e.g., secure DB access)

…all without context‑switching between Stack Overflow, boilerplate repos, and your IDE.
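As an example of the "secure DB access" snippets worth asking for, here is a small sketch that builds pg-style parameterized queries ($1, $2, ...) instead of concatenating user values into SQL. The helper name and columns are illustrative, not from the actual project.

```typescript
// Build a parameterized filter query instead of string-concatenating values.
// NOTE: the table and column names are still interpolated, so both must come
// from a hard-coded allow-list in the caller, never from user input.
function buildFilterQuery(
  table: string,
  filters: Record<string, unknown>
): { text: string; values: unknown[] } {
  const keys = Object.keys(filters);
  const values = keys.map((k) => filters[k]);
  const where = keys.map((k, i) => `${k} = $${i + 1}`).join(" AND ");
  const text = `SELECT * FROM ${table}` + (where ? ` WHERE ${where}` : "");
  return { text, values }; // with node-postgres: pool.query(text, values)
}
```

For example, `buildFilterQuery("sales", { region: "West" })` yields `SELECT * FROM sales WHERE region = $1` with `["West"]` as the values array, so the driver, not string glue, handles the user data.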

But the ecosystem is messy: Cline, Claude Code, Windsurf, Cursor—each comes with its own CLI quirks, and not all of them work out of the box on Windows. No one had written a clear tutorial, so I leaned on ChatGPT to draft my “starter kit” for vibe coding. I settled on Claude Sonnet 3.7 for agentic coding and VS Code for its rich extension ecosystem.



Trials, Errors, and Quotas

A few lessons surfaced immediately:

  1. Agent vs. API billing
    Paid Claude Pro credits don’t apply to Sonnet’s agent API—unexpected costs ticked up quickly.

  2. Syntax habits
    On Windows, early agent runs insisted on && between commands. After a few painful debug loops, I explicitly prompted:

    “Use semicolons to chain shell commands on Windows.”

    This mostly worked, but the agent still kept chaining commands the wrong way and then fixing itself, wasting cycles each time, as shown below:


     

  3. Tool limitations
    LLMs excel at fuzzy thinking (drafting logic, naming conventions) but can be weak at identifying the "right" thing to do. Faced with a compile-time issue caused by a library that required an older Node version, the agent kept reaching for --legacy-peer-deps instead of upgrading the library, even though, when explicitly prompted, it could see that a newer version existed.

  4. Security and testing as second-class citizens

The agentic mode produced decent code but completely ignored security best practices and test cases; both had to be added separately. And while it could add security controls when asked, it had a really tough time writing unit tests. The following diagrams show the lack of test coverage in the generated code, and the relatively severe security findings that surfaced when I prompted it to fix them:





 


Productivity Unleashed—With Caveats

With the environment squared away, I completed a working MVP in two nights—from DB‑query endpoint to a polished UI. Agents generated:

  1. Next.js pages with form inputs

  2. API wrappers for PostgreSQL

  3. Client‑side chart components

All punctuated by manual fixes and prompts for testing, security, and cross‑platform quirks.

My take: AI agents can nudge you toward a “10× developer” pace—maybe even 100× in raw code generation. But they still forget to:

  1. Sanitize user inputs against security vulnerabilities

  2. Write comprehensive unit and integration tests

  3. Handle edge‑case errors or rate limits gracefully
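The first gap, input sanitation, is cheap to close by hand. Below is a dependency-free validator for the reporting endpoint's payload; the `QueryRequest` shape and the 500-character limit are assumptions for illustration, not the project's actual contract.

```typescript
// Validate the request body before it reaches the LLM or the database --
// exactly the step agents tend to skip.
interface QueryRequest {
  question: string;
  format: "table" | "csv";
}

function validateQueryRequest(body: unknown): QueryRequest {
  if (typeof body !== "object" || body === null)
    throw new Error("body must be an object");
  const b = body as Record<string, unknown>;

  if (typeof b.question !== "string" || b.question.trim().length === 0)
    throw new Error("question must be a non-empty string");
  if (b.question.length > 500)
    throw new Error("question too long"); // cap prompt size and cost

  if (b.format !== "table" && b.format !== "csv")
    throw new Error("format must be 'table' or 'csv'");

  return { question: b.question.trim(), format: b.format };
}
```

In a real deployment a schema library would do this more declaratively, but even a hand-rolled gate like this keeps malformed or oversized input away from the model and the database.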

The following screenshot shows its attempts to fix the issues when asked.




Testing—Is It Cheating?

Generating test stubs from an agent feels like a shortcut:

“Write unit and integration tests for entire codebase using mocks as appropriate.”

Sure, it works. But can you trust tests generated against code that was AI‑assembled? My bias says: write only integration tests that validate end‑to‑end behavior, and manually review critical units. That way, you’re testing the system, not just the agent’s understanding of code it generated.
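That system-level stance can be sketched without any test framework: start the service in-process and assert on what actually comes back over HTTP. Here a stub server stands in for the real app's /api/query route, and Node 18+ is assumed for the built-in fetch.

```typescript
import http from "node:http";

// Integration check: exercise the endpoint over real HTTP, end to end,
// rather than trusting unit tests the agent wrote against its own code.
async function runIntegrationCheck(): Promise<number> {
  // Stub standing in for the deployed app's /api/query route.
  const server = http.createServer((req, res) => {
    if (req.url === "/api/query" && req.method === "POST") {
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ rows: [{ region: "West", total: 100 }] }));
    } else {
      res.writeHead(404);
      res.end();
    }
  });
  await new Promise<void>((ready) => server.listen(0, () => ready()));
  const { port } = server.address() as { port: number };

  // Assert on observable behavior, not internal structure.
  const res = await fetch(`http://127.0.0.1:${port}/api/query`, { method: "POST" });
  const body = (await res.json()) as { rows: unknown[] };
  server.close();
  return body.rows.length;
}
```

Pointing the same check at the real server instead of the stub turns it into the end-to-end test this section argues for: it validates the system's behavior regardless of who, or what, wrote the code underneath.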


The Human in the Loop

Throughout, my role was less “coder” and more “orchestrator”:

  1. Prompt architect

  2. Context curator

  3. Quality gatekeeper

I found that my domain expertise—knowing what the UX should feel like, understanding data‑schema trade‑offs, and recognizing security blind spots—was indispensable. The agent unlocked speed, but I guided purpose.


Where We Go from Here

AI‑assisted development is no longer science fiction. It’s very much real, and very much in its infancy. Yet:

  1. Non‑technical users still face too many moving parts to trust these tools in production.

  2. Standardization (e.g., LSP‑style protocols for agents) is needed to bridge the “haves” and “have‑nots.”

  3. Community knowledge (deep tutorials, case studies) lags behind hype‑cycle content.

  4. Technical domain knowledge will keep playing a crucial role: differences between npm’s and Maven’s dependency management, or the lack of Java‑style libraries like Spring Data in Node.js, will cause confusion until the ecosystems align.

  5. Fast-evolving landscape: I started with a known debt of not using MCP, but in the two to three weeks between starting to code and now, significant changes have happened. Firebase Studio has launched, making setup much easier; GPT‑4.1 has launched and is arguably better than Claude 3.7; and Anthropic has published best practices for agentic coding.

Still, I’m pretty excited. If a backend‑focused engineer like me, whose day job is increasingly managerial, can become a lightweight frontend dev in a weekend, it shouldn’t be long before a well‑tuned agent at our side helps us achieve very large productivity gains.


Getting Started Yourself

  1. Choose your agent: Claude Sonnet 3.7, ChatGPT Plugins, or GitHub Copilot.

  2. Set up a single‑language stack on your hosting provider (e.g., Node.js on Vercel / v0, or Firebase).

  3. Iterate with prompts—refine your instructions as you learn the agent’s quirks.

  4. Guardrails first: add linters, input validators, and integration tests early.

  5. Share your learnings: we need more deep‑dive tutorials, not just YouTube shorts or tweets.


Agents won’t replace us—but they will empower us to tackle more ambitious problems, faster. Embrace the rise of the AI‑assisted polyglot developer, and let’s build the future, one prompt at a time.

Check out the full project on GitHub:
https://github.com/AbhiramDwivedi/table-stakes