Monday, March 16, 2026

Thunderstorms and Sunshine | A Principal Software Engineer's perspective

By a Principal Engineer, March 2026

It's a mid-March morning. Sunny, a little cloudy, with a pleasant breeze in the air. I sit here with my coffee, thinking about software engineering: where it's been, where it's going, and what AI really means for those of us who have given our lives to this craft. It's a calm morning. But I know what's coming. By afternoon, severe thunderstorms. Tornadoes on watch. Schools closing early.

Tomorrow, though, will be beautiful again.

That's where we are with AI and software engineering. And I think it's worth talking about honestly. Not with hype, not with fear, but with the perspective of someone who has been in this industry long enough to have seen a few storms before.

Where It Started

I was in 8th grade when I fell in love with software engineering. Not through a class, not through a mentor. Through a GW-BASIC program I found printed somewhere, typed up by hand, and ran on a DOS machine.

On the screen, an apple tree appeared. Apples grew, circled, disappeared, grew back. It was visually enigmatic. Beautiful. And the thing that hit me wasn't "I ran a program." It was: I made something beautiful that wouldn't have existed otherwise.

That feeling is one I've been chasing ever since.

I grew up, got my education, started engineering professionally. As a junior, you learn a lot and influence little. You're a small piece of something enormous, and that's humbling in a good way. But every now and then, something happens that cements why you're here. For me, one of those moments was tracking down a bug that had been crashing production servers for months. Objective-C code. An invalid memset, hiding in plain sight. It wasn't easy to find. It took patience, stubbornness, and a refusal to accept "we don't know why it's crashing." When I finally found it and fixed it, the joy was extraordinary. Not because it was glamorous. Because it was hard, and it mattered.

Those early moments shaped how I see software engineering. It is never easy. It is never one-shot. But there is absolute joy in doing things that would otherwise be very difficult to do.

The Drift, and the Return

Over the years, I moved deeper into architecture, design, and management. My teams wrote the code. I shaped the thinking, made the calls, unblocked the hard problems. I'd still jump in when something needed to move faster than anyone else could move it. That instinct never leaves you. But the hands-on building became rarer.

Then, in February 2025, vibe coding arrived.

I'd experimented with AI-assisted coding before. You'd describe a problem, get some code back, it was interesting but inefficient. More of a curiosity than a capability. Vibe coding changed that. For the first time, I could sit down with an idea and build, really build, with AI as a genuine collaborator.

I started with a side project. And within hours, I felt it again. The same joy. The apple tree from 8th grade. The feeling of making something that wouldn't have existed without me.

Here's what I think vibe coding actually unlocked: it removed a specific kind of friction that had accumulated over years. Not the intellectual friction, which is the fun part. The volume friction. The syntax differences between languages. The boilerplate. The fact that you can see exactly what needs to exist, you know it down to your bones, but materializing it takes days. AI collapsed that gap. And in doing so, it gave back the builder's joy to people like me who had drifted away from the code.

We don't write code because writing code is the end goal. We write code to build things. To make something real out of an idea. AI made that more accessible, not less meaningful.

Speed Goes Up. Judgment Matters More.

Let's be clear about something. AI accelerating code output does not mean engineering judgment matters less. It means it matters more.

Software engineering has always been both science and art. Over decades, our industry has accumulated hard-won principles: DRY, SOLID, 12-factor. Not arbitrary rules, but distilled lessons from countless projects that went sideways. A senior engineer doesn't always recite these principles by name. But they feel them. They look at a piece of code and know, almost instinctively, whether it's going to cause pain in six months.

That instinct doesn't transfer to AI. Not yet.

Here's a real example. Recently, I was reviewing AI-generated code that needed to process log data. The AI was stuck trying to read those logs top-to-bottom. It kept grinding away at the problem that way because that's the obvious path. But anyone who looked at how that log was actually structured would immediately know: read it bottom-to-top. That's it. Problem solved. The AI couldn't see that because it was constrained by its framing of the problem. I wasn't.
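The reframing is trivial to express once you see it. A minimal sketch of the bottom-up read (the file layout and the matching predicate are hypothetical, not the actual production code):

```python
def last_matching_entry(log_path, predicate):
    """Scan a log from the bottom up and return the first (i.e. most
    recent) line that satisfies the predicate."""
    with open(log_path) as f:
        lines = f.readlines()
    for line in reversed(lines):  # bottom-to-top: newest entries first
        if predicate(line):
            return line.rstrip("\n")
    return None
```

For a huge file you would seek backwards from the end instead of reading it all, but the point is the framing: when the answer lives at the bottom, start at the bottom.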

Another example: a developer on my team was running into issues where an AI was failing to generate detailed outputs for a batch of work items. His instinct was to increase the context window, to just throw more at it. Classic junior mistake, honestly. The right move was the opposite: break the problem into smaller chunks, give AI manageable pieces, and work through them sequentially. The same judgment I'd give a junior engineer, I now give to AI-assisted workflows. The nature of the guidance hasn't changed. The recipient has.
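The same guidance translates directly into code. A minimal sketch of the chunking approach, with a placeholder standing in for the bounded AI call (function names are illustrative):

```python
def chunked(items, size):
    """Yield successive fixed-size batches instead of one huge request."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def process_batch(batch):
    # Placeholder for one bounded AI call per batch (hypothetical).
    return [f"summary of {item}" for item in batch]

def process_all(items, size=5):
    """Work through the items sequentially, one manageable piece at a time."""
    results = []
    for batch in chunked(items, size):
        results.extend(process_batch(batch))
    return results
```

Smaller, sequential requests keep each call well inside the model's working capacity, which is usually what "increase the context window" was trying to paper over.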

AI is an amplifier. If your thinking is good, it makes you significantly more productive. If your thinking is flawed, it produces flawed code faster. The engineering judgment, the taste, the architecture, the "wait, why are we even approaching it this way" instinct: that is not being replaced. It is being put to work more than ever.

The Factory Problem

Last week I saw a post about building a software engineering factory: end-to-end automated delivery of software. The ideas were genuinely interesting and I respect the ambition. But I keep coming back to something.

I graduated as a civil engineer before becoming a software engineer. In civil engineering, once the design is fixed, it's fixed. You don't change the foundations of a bridge mid-construction. The beauty and the complexity of software engineering both arise from the same source: it is fluid.

Scope creep exists because it can exist. Agile and SAFe came into being because until people actually see software running, they don't fully know what they want. We are great as humans at imagining abstract ideas. We are not great at knowing exactly how we want them realized until we can see and touch them. Two architects in a room will have an animated, sometimes heated discussion about design tradeoffs. That's not a bug. That's the process.

A factory model will deliver something standardized. That's valuable for maybe 40 to 60% of use cases. But the cases that stand out, the products that resonate, the software that people love, those come from someone having a point of view. A taste. An opinion about what this specific thing should feel and do and be.

AI can build. But AI cannot want. It cannot tell you which vision is worth building. It cannot feel whether something will resonate with your specific audience in your specific context. That judgment, the product mindset, the vision, the "this is what I want it to be and here's why," is irreducibly human. You wouldn't delegate that to a developer without product sense. You wouldn't delegate it to a PM who doesn't understand the users. And you shouldn't delegate it to AI either.

The engineers and leaders who will thrive in this era are the ones who develop that product sensibility alongside their technical depth. Not one or the other. Both.

The Question I Don't Have an Answer To

Here's the thing that keeps me up at night. I want to be honest that I don't have a clean answer.

Taste is earned. You don't graduate as an architect. Nobody hands you the instinct for good system design. You build it over years: through production bugs, through painful refactors, through the memset hunts and the log-reading optimizations and the countless small decisions that add up to something called experience.

If AI absorbs more and more of that work, the debugging, the boilerplate, the small architectural decisions, where does the next generation of principal engineers come from? How do you develop taste without the struggle that forges it?

I don't know. I think there will be mistakes. Some will be caught by the architects and principal engineers who still have the eye for it. Some will make it to production. Companies I deeply respect have already shipped AI-assisted code that caused real revenue impact. That will keep happening for a while.

I believe it will get better. My rough mental model is that we are at step 3 of this evolution. Step 0 was humans writing everything. Step 1 was AI assistance. Step 2 was agentic and vibe coding. Step 3, where we are now, is AI that is starting to develop a kind of flavor. A taste. It doesn't always get it right, but sometimes it does, and you can feel the difference. In five years, I think AI will write reliably good code. In ten years, production-grade code without heavy supervision. But we are not there yet. And in the gap, human judgment is not optional.

Tomorrow Will Be Beautiful

This morning started with sunshine and a pleasant breeze. By afternoon, severe thunderstorms are coming. Tornadoes. Schools are closing early.

AI is going to speed things up. It is going to cause chaos. Teams will shrink. Roles will shift. Some of what we've taken for granted about how software gets built will be upended. That disruption is real, and pretending otherwise helps nobody.

But here's what I'm confident about: software engineering as a profession is going to stay. The number of people will change. The skills that matter will shift. But the need for humans who have taste, who have a vision, who know what good looks like, who can tell the difference between code that will hold up and code that will crumble, that need is not going away. If anything, as AI makes building cheaper and faster, the premium on knowing what to build and why goes up, not down.

The thunderstorms are coming. I'm not going to pretend they won't be disruptive.

But tomorrow is going to be a beautiful day.


If this resonated, I'd love to hear your perspective, especially if you're an engineer or engineering leader navigating these same questions.

Wednesday, June 11, 2025

Human Ingenuity in the Age of Generative AI

Beyond the Prompt: Why Character Outruns Any Model

Beyond Code Generation: The New Engineering Playbook for the AI Era

Illustration: AI brain supported by five pillars—Curiosity, Passion, Grit, Thoroughness, Justice

Picture this: an engineer asks a GenAI assistant to scaffold a data-processing service. The model returns flawless, unit-tested code in minutes—but it silently violates a critical privacy constraint, creating a compliance time-bomb. The tool worked perfectly; judgment failed.

While the industry chases “10× productivity,” code generation is becoming a commodity. Advantage shifts to whoever frames the problem, tests the idea, and guards quality. That’s a hiring question and an operations question.

As engineering leaders, our mandate is no longer just to build faster—it’s to build wiser.

Over dozens of projects and talent reviews, I’ve distilled five human pillars of engineering excellence. AI can amplify them, but never replace them.

The Five Pillars of the AI-Era Engineer

1. Expansive Curiosity: The "What If" Engine

Curiosity is the trait that refuses to accept the current problem statement as the final word. It's the relentless drive to ask "what if?" until the old roadmap looks small. While an AI can optimize a known path, a curious engineer discovers entirely new destinations. This is the engine of true innovation, not just iteration.

In Practice, This Looks Like:
  • Prototyping a solution for a customer problem that isn't even on the backlog yet.
  • Questioning a long-held architectural assumption that "everyone knows" is true.
  • Spending 10% of their time learning a completely unrelated technology to see what concepts might apply back to their core work.
  • Pushing the model with exploratory prompts until it surprises you.

2. Applied Thoroughness: The Scrutiny Shield

If Curiosity expands, Thoroughness protects. This is the professional discipline to pressure-test every artifact, especially those generated by AI. Speed without scrutiny is vanity; it builds technical debt and exposes the business to risk. The thorough engineer is a professional skeptic who trusts, but relentlessly verifies. In one talent review, for instance, we identified a junior developer who had been leaning on AI tools to answer deeply technical questions, simply because he was visibly surprised on hearing a term from his own answers.

In Practice, This Looks Like:
  • Writing tests specifically designed to break the "happy path" logic an AI tends to produce.
  • Rejecting an AI-suggested library after investigating its dependency tree and long-term supportability.
  • Manually walking through generated code to validate its logic against business rules, not just its syntax.
  • Pairing LLM output with static-analysis and threat-model checkpoints.

3. Enduring Grit: The Breakthrough Engine

AI makes starting things easy. Grit is what finishes them. It's the resilience to turn a dozen cheap, failed iterations into one durable breakthrough. Grit is what sustains an engineer through the messy, unglamorous work of refactoring, integration, and debugging a complex system—the parts of the job that AI often makes worse, not better.

In Practice, This Looks Like:
  • Spending three days tracking down an intermittent bug in a legacy system to ensure a new AI-driven feature is reliable.
  • Championing a necessary platform-wide refactor even when it’s not a "sexy" new feature.
  • Methodically instrumenting and monitoring a new system, refusing to call it "done" until it's proven stable in production.
  • Finishing personal side projects despite a calendar full of stand-ups and evenings full of kids' classes.

4. Purpose-Driven Passion: The "Why" Compass

When the tool of the day pivots tomorrow, passion for the underlying purpose is what keeps a great engineer oriented. This is the connection to the "why" behind the work—the desire to solve the customer's problem or advance the company's mission. Passion provides the context AI lacks, ensuring the right problems are solved with empathy.

In Practice, This Looks Like:
  • An engineer on the "billing" team spending a day with the finance department to understand their workflow firsthand.
  • Choosing to build a simpler feature that perfectly solves a user's core need over a more complex one that's technically interesting.
  • Articulately explaining business trade-offs to stakeholders, demonstrating they own the outcome, not just the code.
  • Linking sprint goals to company impact in every demo.

5. Applied Justice: The Conscience in the Machine

An AI is a mirror for the data it's trained on, biases and all. Justice is the active, engineering-led commitment to building fairness into our systems. It moves ethics from a theoretical discussion to a practical discipline. It's about asking "Whom might this harm?" and building safeguards so that as our impact scales, our biases do not. In one of our use cases, for instance, an NLP dashboard cut analytics time by 80%, but only after we found and patched an authorization hole.

In Practice, This Looks Like:
  • Building fairness checks directly into the CI/CD pipeline to flag biased model outputs automatically.
  • Designing systems to use the absolute minimum amount of personally identifiable information (PII) by default.
  • Creating "red team" scenarios to brainstorm and mitigate how a feature could be abused by bad actors.
  • Publish model-risk statements with an executive owner of record.
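To make the first bullet above concrete, a fairness gate in CI can be only a few lines. This is a hedged sketch of an "80% rule"-style check; the group names, threshold, and pass/fail policy are illustrative, not a prescription for any particular system:

```python
def selection_rates(outcomes):
    """outcomes: mapping of group name -> list of 0/1 model decisions."""
    return {group: sum(v) / len(v) for group, v in outcomes.items()}

def passes_fairness_gate(outcomes, max_ratio_gap=0.2):
    """Fail the build if any group's selection rate falls more than
    max_ratio_gap below the best-treated group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return all(best == 0 or r / best >= 1 - max_ratio_gap
               for r in rates.values())
```

Wired into the pipeline, a `False` here blocks the deploy and forces a human conversation, which is exactly the point.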

The Playbook: Building an Org on These Pillars

Identifying these pillars is easy. Building an organization that systematically cultivates them is hard. It requires tangible changes to our operating system.

  1. Hire for Traits, Not Ticks: Design interviews to surface curiosity, grit, fairness. Perfect answers without reasoning trails are red flags.
  2. Upgrade the Career Ladder: Define what “Demonstrates Applied Thoroughness” or “Exhibits Expansive Curiosity” looks like at each level.
  3. Celebrate Process, Not Just Launches: Honour the best debugging story (Grit), the most insightful design-review question (Curiosity), or the smartest decision not to build (Thoroughness).

Conclusion: The Real Work Ahead

Obsessing over efficiency pits humans against machines in a race we can’t win. Our edge is the work AI can’t do: thinking, discerning, persevering, and stewarding with conscience. Build environments where these pillars aren’t just welcomed—they’re demanded.

Reflection prompt: Which pillar needs the most attention in your organisation this quarter—and why? Drop your thoughts in the comments; I reply to every thoughtful note.

Further Reading & References

  • Fowler, Martin. (2023-2024). Articles on Generative AI. martinfowler.com.

    A series of essential, pragmatic articles on GenAI's role. His concepts of the "jagged frontier" of LLM capabilities and the "Semantic Linter" pattern are critical for any leader developing a strategy for AI-assisted development, underscoring the need for Applied Thoroughness.

  • Duckworth, Angela. (2016). Grit: The Power of Passion and Perseverance.

    The definitive book on the science of Grit, providing a deep evidence base for why this trait is so critical for long-term, high achievement in any complex field.

  • Kozyrkov, Cassie. (various). Articles on Decision Intelligence. Towards Data Science.

    As Google's Chief Decision Scientist, Kozyrkov provides crucial clarity on the difference between prediction (machine task) and decision-making (human task). Her work is foundational for understanding the importance of Purpose-Driven Passion in guiding technology.

  • Larson, Will. (2023). "AI is a new reasoning engine." Irrational Exuberance.

    A CTO's perspective on how LLMs are a new tool for thought, highlighting how engineers must use their own Curiosity to direct this powerful new capability effectively.

  • O'Neil, Cathy. (2016). Weapons of Math Destruction.

    The seminal, pre-GenAI book on algorithmic harm. Its lessons are more relevant than ever and provide the bedrock for understanding the need for Applied Justice.

  • Aristotle. (c. 340 BCE). Nicomachean Ethics.

    For those interested in the philosophical roots, this is the foundational text on human excellence that underpins the entire framework of this article.


About the Author

Ram is a Senior Solutions Architect at Sapient, responsible for building high-performing engineering teams that deliver resilient, ethical technology solutions in the AI era.

Monday, April 21, 2025

Rise of the AI‑Assisted Polyglot Developer



“The real power of AI‑powered coding isn’t in replacing developers, but in amplifying our ability to solve problems across languages and domains.”


In recent years, I’ve leaned on large language models (LLMs) to automate small scripting tasks—everything from refactoring Bash scripts to building Python data pipelines. These quick, fun wins kept me wondering: what if I could harness these models to accelerate real full‑stack development? Today I lead engineering teams through complex digital transformations, yet my itch to build never went away. So when an old MNIT Jaipur classmate asked for help with his startup’s SQL reporting pains, I saw an opportunity to scratch that itch and explore how LLMs could turbo‑charge full‑stack development. What started as a simple SQL‑reporting utility blossomed into a deep dive into vibe coding, agentic workflows, and a renewed appreciation for human‑AI collaboration.

The Business Challenge

His operations lead needed ad‑hoc reports: log into a relational database, hand‑craft SQL, massage the output into CSVs. Simple—but manual, slow, and error‑prone. Could we embed a “smart” query assistant directly into his Vercel‑hosted dashboard, letting anyone ask natural‑language questions like:

“Show me total sales by region for Q1.”

…and instantly get a table, chart, or CSV?
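The first hop of such a pipeline is prompt construction: give the model the schema and hard constraints, then the user's question. A minimal sketch, with hypothetical function and schema names (a real system would also need read-only database credentials and validation of the returned SQL):

```python
def build_sql_prompt(question, schema_ddl):
    """Assemble the instruction the LLM sees: constraints first,
    schema context next, then the user's natural-language question."""
    return (
        "You translate questions into a single read-only SQL SELECT.\n"
        "Never modify data; use only the tables below.\n\n"
        f"Schema:\n{schema_ddl}\n\n"
        f"Question: {question}\n"
        "SQL:"
    )
```

The model's reply would then be executed against the database and rendered as a table, chart, or CSV download.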

(Diagram: the LLM‑powered query pipeline.)



Picking the Stack—Pain Points Included

With years of back‑end muscle memory, I initially sketched a Python backend for the LLM logic, with a Next.js front end. But Vercel’s platform pushes you toward a single runtime. After wrestling with mixed‑language builds, I pivoted: all‑in‑JavaScript/TypeScript on Node.js.

The learning curve was steep. I had to:

  1. Discover Vercel’s “v0 agentic mode” and its limitations (free‑tier quotas, usage warnings).

  2. Get up to speed on shadcn/ui and Tailwind CSS for rapid UI prototyping.

  3. Relearn Next.js conventions for server‑side API routes vs. edge functions.

By the end of Week 1, I had a skeletal “Table‑Stakes” project up on GitHub—and a burning question: How fast could I really go if I let an AI agent handle the plumbing?


Enter Vibe Coding

“Vibe coding” loosely describes a workflow where you direct an AI agent—via tools like Claude Sonnet 3.7 or ChatGPT—with short, intent‑based prompts, then iterate on its outputs in situ. It promised to:

  1. Bootstrap boilerplate instantly

  2. Generate utility functions on demand

  3. Suggest best‑practice snippets (e.g., secure DB access)

…all without context‑switching between Stack Overflow, boilerplate repos, and your IDE.

But the ecosystem is messy: Cline, Claude Code, Windsurf, Cursor—each comes with its own CLI quirks, and not all work by default on Windows. No one had written a clear tutorial, so I leaned on ChatGPT to draft my “starter kit” for vibe coding. I settled on Claude Sonnet 3.7 for agentic coding and VS Code for its rich extension ecosystem.



Trials, Errors, and Quotas

A few lessons surfaced immediately:

  1. Agent vs. API billing
    Paid Claude Pro credits don’t apply to Sonnet’s agent API—unexpected costs ticked up quickly.

  2. Syntax habits
    On Windows, early agent runs insisted on && between commands. After a few painful debug loops, I explicitly prompted:

    “Use semicolons to chain shell commands on Windows.”

    This mostly worked, but the agent still periodically chained commands the wrong way, noticed the failure, and fixed itself, wasting plenty of cycles each time.

  3. Tool limitations
    LLMs excel at fuzzy thinking (drafting logic, naming conventions) but can be weak at identifying the "right" thing to do. When faced with a compile-time issue in a library that required an older Node version, the agent kept reaching for --legacy-peer-deps instead of upgrading the library, even though, when explicitly prompted, it could see that a newer version existed.

  4. Security and Testing as Second-Class Citizens

The agentic mode created decent code, but it completely missed security best practices and test cases; all of those had to be added separately. While it was able to add security controls when prompted, it had a really tough time adding unit tests: the generated code came with no test coverage at all, and a security scan surfaced relatively severe findings that I then had to explicitly ask it to fix.


Productivity Unleashed—With Caveats

With the environment squared away, I completed a working MVP in two nights—from DB‑query endpoint to a polished UI. Agents generated:

  1. Next.js pages with form inputs

  2. API wrappers for PostgreSQL

  3. Client‑side chart components

All punctuated by manual fixes and prompts for testing, security, and cross‑platform quirks.

My take: AI agents can nudge you toward a “10× developer” pace—maybe even 100× in raw code generation. But they still forget to:

  1. Sanitize user inputs against security vulnerabilities

  2. Write comprehensive unit and integration tests

  3. Handle edge‑case errors or rate limits gracefully
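The first omission is the most dangerous one for a natural-language-to-SQL tool. One common mitigation, sketched here in Python (the table allow-list is hypothetical): identifiers can't be bound as query parameters, so they are checked against an allow-list, while values go through the driver's parameter binding instead of string interpolation.

```python
ALLOWED_TABLES = {"sales", "customers", "orders"}  # hypothetical allow-list

def safe_query(table, region):
    """Build a query safely: validate the identifier, parameterize the value."""
    if table not in ALLOWED_TABLES:
        raise ValueError(f"table not allowed: {table!r}")
    sql = (f"SELECT region, SUM(amount) FROM {table} "
           "WHERE region = %s GROUP BY region")
    params = (region,)
    # Hand both to the DB driver, e.g. cursor.execute(sql, params),
    # so the value is never spliced into the SQL string.
    return sql, params
```

This is exactly the kind of guardrail the agent tends to skip unless you ask for it by name.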

When asked, the agent did make a genuine attempt at fixing these issues.


Testing—Is It Cheating?

Generating test stubs from an agent feels like a shortcut:

“Write unit and integration tests for entire codebase using mocks as appropriate.”

Sure, it works. But can you trust tests generated against code that was AI‑assembled? My bias says: write only integration tests that validate end‑to‑end behavior, and manually review critical units. That way, you’re testing the system, not just the agent’s understanding of code it generated.
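To illustrate the distinction with a toy example (the pipeline here is hypothetical, not the actual project code): instead of unit-testing each AI-generated helper in isolation, one test drives the whole path and pins down observable end-to-end behavior.

```python
def parse_row(line):
    region, amount = line.split(",")
    return region.strip(), int(amount)

def total_by_region(lines):
    totals = {}
    for line in lines:
        region, amount = parse_row(line)
        totals[region] = totals.get(region, 0) + amount
    return totals

def render_csv(totals):
    return "region,total\n" + "\n".join(
        f"{r},{t}" for r, t in sorted(totals.items()))

def test_report_end_to_end():
    # One test through the whole path: parse -> aggregate -> render.
    lines = ["east, 10", "west, 5", "east, 20"]
    assert render_csv(total_by_region(lines)) == "region,total\neast,30\nwest,5"
```

If the agent mangled any internal helper, this test fails; it verifies the system's behavior rather than the agent's self-consistent understanding of its own code.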


The Human in the Loop

Throughout, my role was less “coder” and more “orchestrator”:

  1. Prompt architect

  2. Context curator

  3. Quality gatekeeper

I found that my domain expertise—knowing what the UX should feel like, understanding data‑schema trade‑offs, and recognizing security blind spots—was indispensable. The agent unlocked speed, but I guided purpose.


Where We Go from Here

AI‑assisted development is no longer science fiction. It's very much real, and very much in its infancy. Yet:

  1. Non‑technical users still face too many moving parts to trust these tools in production.

  2. Standardization (e.g., LSP‑style protocols for agents) is needed to bridge the “haves” and “have‑nots.”

  3. Community knowledge (deep tutorials, case studies) lags behind hype‑cycle content.

  4. Technical domain knowledge will keep playing a crucial role: e.g. the difference between npm's dependency management and Maven's, or the lack of Java-style libraries like spring-data in Node.js, will cause confusion until the ecosystems align.

  5. Fast-evolving landscape: I started with a known debt of not using MCP. But in the two to three weeks between starting to code and now, significant changes have happened: Firebase Studio has launched and makes much of this easier, GPT-4.1 has launched and is arguably better than Claude 3.7, and Claude has published best practices for agentic coding.

Still, I’m pretty excited. If a backend‑focused engineer like me, with increasingly management-heavy experience, can become a lightweight frontend dev in a weekend, it shouldn't be long before a well‑tuned agent at our side helps us achieve very large productivity gains.


Getting Started Yourself

  1. Choose your agent: Claude Sonnet 3.7, ChatGPT Plugins, or GitHub Copilot.

  2. Set up a single‑language stack on your hosting provider (e.g., Node.js on Vercel / v0, or Firebase).

  3. Iterate with prompts—refine your instructions as you learn the agent’s quirks.

  4. Guardrails first: add linters, input validators, and integration tests early.

  5. Share your learnings: we need more deep‑dive tutorials, not just YouTube shorts or tweets.


Agents won’t replace us—but they will empower us to tackle more ambitious problems, faster. Embrace the rise of the AI‑assisted polyglot developer, and let’s build the future, one prompt at a time.

Check out the full project on GitHub:
https://github.com/AbhiramDwivedi/table-stakes


Wednesday, August 12, 2020

How to crack AWS Solutions Architect Pro Certification

A few weeks ago I cleared my AWS Certified Solutions Architect Professional certification, on the first attempt, with a clear pass. Today, we are going to talk about what I did, how I did it, the mistakes I made, and how I would do it with the foresight that hindsight has now given me.

The usual disclaimer: AWS CSA Pro is a tough exam. I have 17 years of experience in the industry, I hold multiple certifications, and this is the first time since college that I was reminded of taking exams for classes I understood nothing of! Unless you really need it, I would recommend NOT doing it.

If you're still reading, aka determined to take the certification, congratulations on taking the first, hardest step and good luck getting it!

The AWS CSA Pro exam is a three-hour exam with no scheduled breaks and 75 multiple-choice questions. The exam can be taken online or onsite. To me, one of the toughest things about the exam was navigating the maze-like stories built on simple concepts. English is purportedly my second language, and if it is yours too, I would recommend requesting the extra 30 minutes when scheduling (which I did not). I had questions where all answers looked correct, or all looked wrong, until I read them a couple of times and narrowed down the similarities and differences.

From a knowledge perspective, the basics are not significantly different from AWS CSA Associate, but the devil does lie in the details here: CSA Pro is all about details. I did my CSAA in 2017, and a lot has changed in the dynamic cloud landscape in these three years, including the material covered in the certification. Don't repeat my mistake of thinking that you know AWS because a) you are certified, b) you have hands-on experience with *some* services, or c) you have been following announcements. AWS and the cloud are a fast-changing landscape, and knowledge becomes stale earlier than you might imagine.

I spent over six months preparing for the certification, although that was broken across multiple sprints. Don't do that. All you need is two months of dedicated study.

I had a fantastic experience with acloud.guru for CSA Associate and I still recommend it for CSAA, but I would NOT recommend the same for CSA Pro. The course itself is good for practical knowledge: it talks about real-life experiences and expectations of a Professional Architect, but it cannot do justice to exam preparation. It liberally recommends various whitepapers, but then the whole idea of buying a course is to ease the pain of going through dry documentation. If you must buy acloud.guru, buy it for knowledge, not for the certification. And if you do, take a look at all the referenced re:Invent videos and whitepapers. The number of whitepapers you'd have to read is enormous, and I wouldn't be able to link them all here. One video that I particularly liked is Advanced VPC Design.

My enterprise supports a Udemy subscription, and I used Stéphane Maarek's course for my primary study. This course is pretty good, and I would recommend it. Unlike acloud.guru, which you can usually speed up to 1.5x if not 2x, this course is dense and really needs to be taken at 1x speed. Plan to go through the videos twice. I really liked that Stéphane summarizes some sections, and I hope he adds similar summaries to the others. The course is quite keen on the minor-ish details that matter for the exam, e.g. cost vs. speed of the various AWS Elastic Block Store options. Feel free to pause and repeat as many times as you need, and take a break if it becomes heavy and focus becomes a challenge. Details that pass by in a jiffy can show up as exam questions; the number of services and details covered in the exam is humongous. Also note that this course is classified as a "summary" by some sites and is indeed one of the shorter ones: it does not cover Console logins or any other practicals, and is completely theory-based. Which brings me to another course I used intermittently.

A colleague recommended Zeal Vora's course to me. I looked at it primarily because I was not gaining enough confidence. It is a very detailed course, and Zeal does a fantastic job of showing how to use various services. I would strongly recommend it for 201-level deep dives on services for on-the-job work. However, apart from being too long, I think it does not do justice to either highlighting the key concepts for the exam or explaining the higher-level skills expected of an Architect Professional on the job. If you can have two courses, check the specific chapters on topics you have trouble understanding. E.g., I come from a development background and networking is my weak area, so I spent time with Zeal Vora's course, as well as a couple of re:Invent videos, (re)understanding AWS networking concepts.

Tip: Udemy runs promotions where courses are available at significant discounts. Sign up for emails and, if you're not in a hurry, wait for a promotion to buy the course.

One strategy that worked for me during the exam was grouping the answer choices into categories and then identifying the correct option within each category, to arrive at the overall 2, 3, or 4 correct answers out of many more.

Finally, one last thing that helped me practice questions was Jon Bonso's sample questions on Tutorials Dojo. I had some technical issues with lost answers, but other than that, I liked it. It gives you wide enough exposure to catch items you overlooked during preparation. For example, I had missed understanding "networkMode" in Fargate; looking at the practice exams, I was able to go back, read up, and understand it.

In summary:
- Have a wide exposure and real knowledge before you choose to take up the certification
- Use Stéphane Maarek's course for understanding
- Try out Jon Bonso's sample questions
- Remember, it is not going to be easy. Don't be harsh on yourself when preparation becomes hard

Monday, May 11, 2020

GitOps your Elastic Beanstalk environment properties

In today's blog, I am going to cover how to set up environment variables for Elastic Beanstalk, with a focus on a GitOps approach. AWS prefers to call these environment properties.

Enterprise applications depend on multiple configuration values that change across environments. In the very pragmatic Build Once, Deploy Many approach, an application bundle is created once and deployed to multiple environments. Such configuration is kept strictly outside of application code, as per the Twelve-Factor principles.

For an Elastic Beanstalk application, there are different ways to provide this configuration:

  1. The crudest way to set up this configuration is to log in to the EB console and add it manually. This is well explained in the AWS documentation, and I am not going to cover it here. Being manual, this approach is error prone and does not scale.
  2. A far better approach is to commit such configuration to version control using configuration files. This requires the super powerful ebextensions and is explained in the AWS documentation with an example of .ebextensions/options.config. If you're new to setting these up, I'd suggest staying away from the detailed documentation, which can be pretty confusing. A few things to note here:
    1. The config file must be valid YAML. I learned this the hard way, and now use my favorite linter, http://www.yamllint.com/, to validate. Given the structure of this config file, it's easy to make formatting mistakes, and the documentation does not mention that this is YAML.
    2. The weird-looking option_settings is the way to define environment variables, and is explained in a not-so-intuitive way in the AWS documentation.
    3. Namespace is important when defining anything in this config file. For environment variables, the namespace is aws:elasticbeanstalk:application:environment
    4. This config file can reference AWS CloudFormation! This is a big deal. This implies that you can reference any other resources created earlier in your CFN stack. For example, when your database got created, it might have added credentials to secrets manager, and that can be referenced through this config file. It is possible to reference Secrets Manager through code as well, and frameworks such as Spring make it a breeze, but what such frameworks cannot do is reference your CFN stack.
    5. The config file can be a full-fledged CFN file. For example, look at this sample provided by AWS. Apparently, CFN references can either be pseudo references or references to resources created by this config file itself. You could always reference CFN params from other stacks, but you'd probably need to know the stack ID, and without a pre-defined naming convention, that could be a pain to solve. We are not going to do that today.
    6. Finally, note the back-tick when referencing CFN. Miss it, and things will start failing!
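Putting the notes above together, here is a minimal sketch of what such a config file might look like. The property names APP_LOG_LEVEL and DEPLOY_ENV_NAME are illustrative, and the AWSEBEnvironmentName reference is an assumption based on the Elastic Beanstalk resource names exposed to CFN functions:

```yaml
# .ebextensions/options.config -- must be valid YAML
option_settings:
  # the namespace for environment variables / properties
  aws:elasticbeanstalk:application:environment:
    # a plain static value (illustrative name)
    APP_LOG_LEVEL: INFO
    # a CloudFormation reference -- note the back-ticks around the Ref
    DEPLOY_ENV_NAME: '`{"Ref": "AWSEBEnvironmentName"}`'
```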
But, what if you wanted to access some custom parameters that are not an output of CloudFormation? How would you access such custom parameters? 

Let's take a step back. Where would you even keep your custom parameters?

AWS provides a valuable resource, AWS SSM Parameter Store, to store such custom parameters. Once defined, they can be referenced consistently by different applications in your account, such as Lambda and Elastic Beanstalk. We version control these parameters and deploy them to our account.
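As a sketch of what version controlling such a parameter can look like, a small CloudFormation template committed alongside your infrastructure code might define it like this (the file, logical, and parameter names here are illustrative):

```yaml
# ssm-parameters.yaml -- deployed to the account as its own CFN stack
Resources:
  CustomSsmParam:
    Type: AWS::SSM::Parameter
    Properties:
      Name: CUSTOM_SSM_PARAM
      Type: String
      Value: some-environment-specific-value
```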

So, how do we reference parameters stored in Parameter Store in our application? It turns out that Elastic Beanstalk has no well-documented way to define an environment variable referencing SSM parameters. You could try setting it using ebextension hooks, but that is a rabbit hole that burned over two days of mine and still did not work. A couple of folks have exported variables using these hooks and used them in their EB applications. That did not work for me.

What does work, however, is our good old friend, CloudFormation. CFN allows referencing SSM values. With that in mind, our config file can easily be modified to reference SSM parameters, such as in the snippet below.
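Here is a minimal sketch of such a config file, assuming a parameter named CUSTOM_SSM_PARAM at version 1 (substitute your own parameter name and version):

```yaml
option_settings:
  aws:elasticbeanstalk:application:environment:
    CUSTOM_SSM_PARAM: '{{resolve:ssm:CUSTOM_SSM_PARAM:1}}'
```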

Let me explain this tiny snippet a little more. At line 3, we define the custom parameter, with a name of "CUSTOM_SSM_PARAM", referring to an SSM parameter of the same name, with a unique version number. Note that, as of writing this blog, CFN does not support using the LATEST version (duh!) of a parameter, but relies on a specific version being specified. If you fail to specify the version number, an error will be thrown. Also note that while String type SSM parameters can be referenced, SecureString parameters cannot. This limitation is documented here.

A final note: if you look for this environment variable after setting it up through ebextensions, you'd see that the value in the UI console still shows up as '{{resolve:ssm:CUSTOM_SSM_PARAM:VERSION}}' and not the resolved value. From the application's perspective, however, it resolves just fine, implying this is not a one-time static binding, but a truly dynamic reference to SSM.

Saturday, May 4, 2019

DevOps 101

So, you've heard of the term DevOps and are curious, why are people going bananas over it?! What really is DevOps, is there an industry standard definition for it? Does it relate to Agile? Is it based on principles like Agile? My teams are already having a hard time delivering their best, why do I have to hire a DevOps person now?

All very valid questions. And guess what? I will have some answers here and some more, later.

Let's begin with the difficult part. There is no universal definition of DevOps; unlike Agile, there is no DevOps Manifesto, and when you talk to people, each will have a similar but varying definition of it. In fact, in the organization I consult for, there are 172 groups with "DevOps" in their names, under varying management! So, does that mean nobody can say what DevOps is? No, not really; that wouldn't be true. There are broad principles associated with DevOps practices and culture, and these 172 groups would all fit somewhere in that spectrum.

But wait... culture? Did I just say culture? What has DevOps got to do with culture? Isn't it about using those fancy work-in-progress tools that evolve so fast that nobody can keep up with them? Well, my esteemed reader, it is so much more than tools. It is about practices and culture and more. But I am getting ahead of myself here.

The term "DevOps" itself combines "Developers", the people who write code, and "Operations", the people who keep that code and the infrastructure under it running across environments, essentially implying close collaboration between these folks. When we discuss some of the practices, probably in a future blog, we'll see how this collaboration is reflected in various DevOps activities. Generally speaking, DevOps implies the use of engineering practices that enable quicker delivery of well-tested, good-quality code to a robust production environment, with significant automation tied into the process.

With my high-level definition out of the way, let's have a quick chat about some more details:

§  A brief History


Patrick Debois, a Belgian consultant, is credited with coining the term DevOps, implying collaboration between developers and operations. Apparently, the term was first used for the DevOpsDays 2009 conference in Belgium. The idea of DevOps formed and spread like wildfire, with various industry veterans coming together to share their learning and passion. At the Velocity 2009 conference, John Allspaw and Paul Hammond presented "10+ Deploys Per Day: Dev and Ops Cooperation at Flickr" (link in references), which started shaking up how deployments were looked at. Time to market has been shrinking ever since, and based on a 2016 report, Amazon deploys code to production every 11.7 seconds on average.

This feat of continuous deployment is achieved through various architectural and engineering choices that have to be made at the start of application development. No wonder you hear DevOps stories more often from new-age companies. However, it would be factually incorrect to assume that DevOps applies only to greenfield projects or only to new or smaller companies. Most of the biggest organizations across various sectors, including highly controlled ones such as the federal government, are adopting DevOps practices for better profitability and competitiveness, or to drive innovation faster.

You might wonder: why deploy faster? Great question, and a really good topic for a future blog.


§  What are DevOps practices?

How do we say a team is adopting DevOps? What do they do when they do DevOps? From a 10,000 ft perspective, it comes down to CALMS:


§  C for Culture

      • DevOps aims to establish motivated teams with shared pride, ownership and responsibility of product, that work with a growth mindset.

§  A for Automation

      • Automation is a cornerstone of the DevOps movement and facilitates collaboration. Automating tasks such as testing, configuration and deployment frees people up to focus on other valuable activities and reduces the chance of human error.

§  L for Lean

      • Team members are able to visualize work in progress (WIP), limit batch sizes and manage queue lengths. Again, we depend on our partners from the Agile community to help with this.

§  M for Measurement

      • DevOps teams measure a lot: from the performance of the delivery pipeline itself, to application and infrastructure health. This includes things like CPU/memory monitoring, JVM monitoring, and Change Lead Time. The Four Key Metrics, now in the "Adopt" section of the ThoughtWorks radar, are, as the name suggests, the key metrics for DevOps measurement itself.

§  S for Share

      • Share Success, Failure, Feedback - between and across the teams and members

§  How do I learn DevOps?

Ok, all that mumbo-jumbo is good. Now, where do I start learning DevOps?

I will give you three paths:
    1. Or, wait for more blogs 
    2. Or, look at this learning path 

I know this was a bad joke section. Let's move back to serious stuff :)

§  DevOps Thought Leaders

Fortunately, there are many folks in DevOps who really love to share their awesome work. Some of the folks I follow are listed below. This is by no means an exhaustive list, just the ones I follow.

      • James Turnbull
      • Chris Riley
      • Kelsey Hightower
      • Sean Hull

References:
https://devops.com/the-origins-of-devops-whats-in-a-name/
https://newrelic.com/devops/what-is-devops
https://www.devopsdays.org/about/
https://techbeacon.com/devops/10-companies-killing-it-devops
https://docs.microsoft.com/en-us/azure/devops/learn/what-is-devops-culture
https://martinfowler.com/bliki/DevOpsCulture.html
https://whatis.techtarget.com/definition/CALMS
https://www.scaledagileframework.com/devops/
https://www.thoughtworks.com/radar/techniques/four-key-metrics
https://medium.com/@fabiojose/devops-kpi-in-practice-chapter-2-change-lead-time-and-volume-9e80ac7ca54
https://www.agilealliance.org/glossary/lead-time
https://github.com/kamranahmedse/developer-roadmap
https://sweetcode.io/top-10-thought-leaders-devops/