Wednesday, June 11, 2025

Human Ingenuity in the Age of Generative AI

Beyond the Prompt: Why Character Outruns Any Model

Beyond Code Generation: The New Engineering Playbook for the AI Era

Illustration: AI brain supported by five pillars—Curiosity, Passion, Grit, Thoroughness, Justice

Picture this: an engineer asks a GenAI assistant to scaffold a data-processing service. The model returns flawless, unit-tested code in minutes—but it silently violates a critical privacy constraint, creating a compliance time-bomb. The tool worked perfectly; judgment failed.

While the industry chases “10× productivity,” code generation is becoming a commodity. Advantage shifts to whoever frames the problem, tests the idea, and guards quality. That is both a hiring question and an operations question.

As engineering leaders, our mandate is no longer just to build faster—it’s to build wiser.

Over dozens of projects and talent reviews, I’ve distilled five human pillars of engineering excellence. AI can amplify them, but never replace them.

The Five Pillars of the AI-Era Engineer

1. Expansive Curiosity: The "What If" Engine

Curiosity is the trait that refuses to accept the current problem statement as the final word. It's the relentless drive to ask "what if?" until the old roadmap looks small. While an AI can optimize a known path, a curious engineer discovers entirely new destinations. This is the engine of true innovation, not just iteration.

In Practice, This Looks Like:
  • Prototyping a solution for a customer problem that isn't even on the backlog yet.
  • Questioning a long-held architectural assumption that "everyone knows" is true.
  • Spending 10% of their time learning a completely unrelated technology to see what concepts might apply back to their core work.
  • Pushing the model with exploratory prompts until it surprises you.

2. Applied Thoroughness: The Scrutiny Shield

If Curiosity expands, Thoroughness protects. This is the professional discipline to pressure-test every artifact, especially those generated by AI. Speed without scrutiny is vanity; it builds technical debt and exposes the business to risk. The thorough engineer is a professional skeptic who trusts, but relentlessly verifies. In one of our talent reviews, we identified a junior developer who was using AI tools to answer deeply technical questions; the giveaway was visible surprise on hearing one of the terms involved.

In Practice, This Looks Like:
  • Writing tests specifically designed to break the "happy path" logic an AI tends to produce.
  • Rejecting an AI-suggested library after investigating its dependency tree and long-term supportability.
  • Manually walking through generated code to validate its logic against business rules, not just its syntax.
  • Pairing LLM output with static-analysis and threat-model checkpoints.

3. Enduring Grit: The Breakthrough Engine

AI makes starting things easy. Grit is what finishes them. It's the resilience to turn a dozen cheap, failed iterations into one durable breakthrough. Grit is what sustains an engineer through the messy, unglamorous work of refactoring, integration, and debugging a complex system—the parts of the job that AI often makes worse, not better.

In Practice, This Looks Like:
  • Spending three days tracking down an intermittent bug in a legacy system to ensure a new AI-driven feature is reliable.
  • Championing a necessary platform-wide refactor even when it’s not a "sexy" new feature.
  • Methodically instrumenting and monitoring a new system, refusing to call it "done" until it's proven stable in production.
  • Finishing personal side projects despite a calendar full of stand-ups and evenings full of kids' classes.

4. Purpose-Driven Passion: The "Why" Compass

When the tool of the day pivots tomorrow, passion for the underlying purpose is what keeps a great engineer oriented. This is the connection to the "why" behind the work—the desire to solve the customer's problem or advance the company's mission. Passion provides the context AI lacks, ensuring the right problems are solved with empathy.

In Practice, This Looks Like:
  • An engineer on the "billing" team spending a day with the finance department to understand their workflow firsthand.
  • Choosing to build a simpler feature that perfectly solves a user's core need over a more complex one that's technically interesting.
  • Articulately explaining business trade-offs to stakeholders, demonstrating they own the outcome, not just the code.
  • Linking sprint goals to company impact in every demo.

5. Applied Justice: The Conscience in the Machine

An AI is a mirror for the data it's trained on, biases and all. Justice is the active, engineering-led commitment to building fairness into our systems. It moves ethics from a theoretical discussion to a practical discipline. It's about asking "Whom might this harm?" and building safeguards so that as our impact scales, our biases do not. In one of our use cases, an NLP dashboard cut analytics time by 80%, but only after we patched an authorization hole.

In Practice, This Looks Like:
  • Building fairness checks directly into the CI/CD pipeline to flag biased model outputs automatically (see the sketch after this list).
  • Designing systems to use the absolute minimum amount of personally identifiable information (PII) by default.
  • Creating "red team" scenarios to brainstorm and mitigate how a feature could be abused by bad actors.
  • Publishing model-risk statements with an executive owner of record.
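To make the first bullet above concrete, here is a minimal sketch of what such a CI fairness gate could look like. It is illustrative only: the file name, the EVAL_RESULTS_JSON input, and the 80% threshold are assumptions, not a prescription; a real pipeline would use your team's own evaluation data and fairness definition.

    // fairness-gate.ts (hypothetical): fail the build when the model's positive-outcome
    // rate diverges too far across groups in an offline evaluation set.
    type Prediction = { group: string; positive: boolean };

    function positiveRates(preds: Prediction[]): number[] {
      const totals = new Map<string, { pos: number; n: number }>();
      for (const p of preds) {
        const t = totals.get(p.group) ?? { pos: 0, n: 0 };
        t.n += 1;
        if (p.positive) t.pos += 1;
        totals.set(p.group, t);
      }
      return [...totals.values()].map((t) => t.pos / t.n);
    }

    // Demographic-parity style check: the lowest group rate must be within 80% of the highest.
    export function fairnessGate(preds: Prediction[], minRatio = 0.8): boolean {
      const rates = positiveRates(preds);
      if (rates.length < 2) return true; // nothing to compare
      return Math.min(...rates) / Math.max(...rates) >= minRatio;
    }

    // In CI, a non-zero exit code flags the build for human review.
    const evalResults: Prediction[] = JSON.parse(process.env.EVAL_RESULTS_JSON ?? "[]");
    if (evalResults.length > 0 && !fairnessGate(evalResults)) {
      console.error("Fairness gate failed: positive-outcome rates diverge across groups.");
      process.exit(1);
    }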

The Playbook: Building an Org on These Pillars

Identifying these pillars is easy. Building an organization that systematically cultivates them is hard. It requires tangible changes to our operating system.

  1. Hire for Traits, Not Ticks: Design interviews to surface curiosity, grit, fairness. Perfect answers without reasoning trails are red flags.
  2. Upgrade the Career Ladder: Define what “Demonstrates Applied Thoroughness” or “Exhibits Expansive Curiosity” looks like at each level.
  3. Celebrate Process, Not Just Launches: Honour the best debugging story (Grit), the most insightful design-review question (Curiosity), or the smartest decision not to build (Thoroughness).

Conclusion: The Real Work Ahead

Obsessing over efficiency pits humans against machines in a race we can’t win. Our edge is the work AI can’t do: thinking, discerning, persevering, and stewarding with conscience. Build environments where these pillars aren’t just welcomed—they’re demanded.

Reflection prompt: Which pillar needs the most attention in your organisation this quarter—and why? Drop your thoughts in the comments; I reply to every thoughtful note.

Further Reading & References

  • Fowler, Martin. (2023-2024). Articles on Generative AI. martinfowler.com.

    A series of essential, pragmatic articles on GenAI's role. His concepts of the "jagged frontier" of LLM capabilities and the "Semantic Linter" pattern are critical for any leader developing a strategy for AI-assisted development, underscoring the need for Applied Thoroughness.

  • Duckworth, Angela. (2016). Grit: The Power of Passion and Perseverance.

    The definitive book on the science of Grit, providing a deep evidence base for why this trait is so critical for long-term, high achievement in any complex field.

  • Kozyrkov, Cassie. (various). Articles on Decision Intelligence. Towards Data Science.

    As Google's Chief Decision Scientist, Kozyrkov provides crucial clarity on the difference between prediction (machine task) and decision-making (human task). Her work is foundational for understanding the importance of Purpose-Driven Passion in guiding technology.

  • Larson, Will. (2023). "AI is a new reasoning engine." Irrational Exuberance.

    A CTO's perspective on how LLMs are a new tool for thought, highlighting how engineers must use their own Curiosity to direct this powerful new capability effectively.

  • O'Neil, Cathy. (2016). Weapons of Math Destruction.

    The seminal, pre-GenAI book on algorithmic harm. Its lessons are more relevant than ever and provide the bedrock for understanding the need for Applied Justice.

  • Aristotle. (c. 340 BCE). Nicomachean Ethics.

    For those interested in the philosophical roots, this is the foundational text on human excellence that underpins the entire framework of this article.


About the Author

Ram is a Senior Solutions Architect at Sapient, responsible for building high-performing engineering teams that deliver resilient, ethical technology solutions in the AI era.

Monday, April 21, 2025

Rise of the AI‑Assisted Polyglot Developer



“The real power of AI‑powered coding isn’t in replacing developers, but in amplifying our ability to solve problems across languages and domains.”


In recent years, I’ve leaned on large language models (LLMs) to automate small scripting tasks—everything from refactored Bash scripts to Python data pipelines. These quick wins kept me wondering: what if I could harness these models to accelerate real full-stack development? Today I lead engineering teams through complex digital transformations, yet my itch to build never went away. So when an old MNIT Jaipur classmate asked for help with his startup’s SQL reporting pains, I saw an opportunity to scratch that itch—and explore how LLMs could turbo-charge full-stack development. What started as a simple SQL-reporting utility blossomed into a deep dive into vibe coding, agentic workflows, and a renewed appreciation for human-AI collaboration.

The Business Challenge

His operations lead needed ad‑hoc reports: log into a relational database, hand‑craft SQL, massage the output into CSVs. Simple—but manual, slow, and error‑prone. Could we embed a “smart” query assistant directly into his Vercel‑hosted dashboard, letting anyone ask natural‑language questions like:

“Show me total sales by region for Q1.”

…and instantly get a table, chart, or CSV?

Diagram: the LLM-powered query pipeline.
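In essence, the whole pipeline hangs off one API route that turns the question into SQL and the SQL into rows. Here is a minimal sketch, assuming the Next.js App Router, the pg client, and an OpenAI-style chat-completions endpoint; the file path, model name, and environment variable names are illustrative rather than the project's actual code.

    // app/api/query/route.ts (illustrative): natural-language question -> SQL -> rows
    import { NextResponse } from "next/server";
    import { Pool } from "pg";

    const pool = new Pool({ connectionString: process.env.DATABASE_URL });

    export async function POST(req: Request) {
      const { question } = await req.json();

      // Ask the model for a single read-only SQL statement.
      const llmRes = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.LLM_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-4o-mini", // placeholder model name
          messages: [
            { role: "system", content: "Translate the user's question into one read-only PostgreSQL SELECT statement. Return only the SQL." },
            { role: "user", content: question },
          ],
        }),
      });
      const sql: string = (await llmRes.json()).choices[0].message.content.trim();

      // Guardrail: refuse anything that is not a plain SELECT before touching the database.
      if (!/^select\b/i.test(sql)) {
        return NextResponse.json({ error: "Only SELECT queries are allowed" }, { status: 400 });
      }

      const { rows } = await pool.query(sql);
      return NextResponse.json({ sql, rows }); // the UI renders rows as a table, chart, or CSV
    }

Everything else in the dashboard (charts, CSV export, auth) builds on top of a route like this.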



Picking the Stack—Pain Points Included

With years of back‑end muscle memory, I initially sketched a Python backend for the LLM logic, with a Next.js front end. But Vercel’s platform pushes you toward a single runtime. After wrestling with mixed‑language builds, I pivoted: all‑in‑JavaScript/TypeScript on Node.js.

The learning curve was steep. I had to:

  1. Discover Vercel’s “v0 agentic mode” and its limitations (free‑tier quotas, usage warnings).

  2. Get up to speed on shadcn/ui and Tailwind CSS for rapid UI prototyping.

  3. Relearn Next.js conventions for server‑side API routes vs. edge functions.

By the end of Week 1, I had a skeletal “Table‑Stakes” project up on GitHub—and a burning question: How fast could I really go if I let an AI agent handle the plumbing?


Enter Vibe Coding

“Vibe coding” loosely describes a workflow where you direct an AI agent—via tools like Claude Sonnet 3.7 or ChatGPT—with short, intent‑based prompts, then iterate on its outputs in situ. It promised to:

  1. Bootstrap boilerplate instantly

  2. Generate utility functions on demand

  3. Suggest best‑practice snippets (e.g., secure DB access)

…all without context‑switching between Stack Overflow, boilerplate repos, and your IDE.

But the ecosystem is messy: Cline, Claude Code, Windsurf, Cursor—each comes with its own CLI quirks, and not all of them work out of the box on Windows. No one had written a clear tutorial, so I leaned on ChatGPT to draft my “starter kit” for vibe coding. I settled on Claude Sonnet 3.7 for agentic coding and VS Code for its rich extension ecosystem.



Trials, Errors, and Quotas

A few lessons surfaced immediately:

  1. Agent vs. API billing
    Paid Claude Pro credits don’t apply to Sonnet’s agent API—unexpected costs ticked up quickly.

  2. Syntax habits
    On Windows, early agent runs insisted on && between commands. After a few painful debug loops, I explicitly prompted:

    “Use semicolons to chain shell commands on Windows.”

    This mostly worked, but the agent still kept chaining commands the wrong way and then fixing them, wasting plenty of cycles, as shown below:


     

  3. Tool limitations
    LLMs excel at fuzzy thinking (drafting logic, naming conventions) but can be weak at identifying the "right" thing to do. When faced with a compile-time issue caused by a library that required an older Node version, the agent kept reaching for --legacy-peer-deps instead of upgrading the library, even though, when explicitly prompted, it could see that a newer version existed.

  4. Security and testing as second-class citizens

The agentic mode produced decent code, but it completely missed security best practices and test cases; all of those had to be added separately. While it was able to add security controls when prompted, it had a really tough time adding unit tests. The diagrams below show the lack of test coverage when the code was first generated, and the relatively severe security findings that I then prompted it to fix:





 


Productivity Unleashed—With Caveats

With the environment squared away, I completed a working MVP in two nights—from DB‑query endpoint to a polished UI. Agents generated:

  1. Next.js pages with form inputs

  2. API wrappers for PostgreSQL

  3. Client‑side chart components

All punctuated by manual fixes and prompts for testing, security, and cross-platform quirks.

My take: AI agents can nudge you toward a “10× developer” pace—maybe even 100× in raw code generation. But they still forget to:

  1. Sanitize user inputs against security vulnerabilities

  2. Write comprehensive unit and integration tests

  3. Handle edge‑case errors or rate limits gracefully

The following screenshot shows its attempts at fixing those issues when asked to.




Testing—Is It Cheating?

Generating test stubs from an agent feels like a shortcut:

“Write unit and integration tests for entire codebase using mocks as appropriate.”

Sure, it works. But can you trust tests generated against code that was AI‑assembled? My bias says: write only integration tests that validate end‑to‑end behavior, and manually review critical units. That way, you’re testing the system, not just the agent’s understanding of code it generated.
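To show what that bias looks like in practice, here is a minimal end-to-end check, assuming Vitest, a dev server running on localhost, and a response shape like the route sketched earlier; none of this is the project's actual test suite.

    // query.integration.test.ts (illustrative): exercise the whole path, not the agent's units
    import { describe, it, expect } from "vitest";

    describe("natural-language query endpoint", () => {
      it("returns the generated SQL and rows for a simple question", async () => {
        const res = await fetch("http://localhost:3000/api/query", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ question: "Show me total sales by region for Q1" }),
        });
        expect(res.status).toBe(200);

        const body = await res.json();
        expect(body.sql).toMatch(/^select/i);        // only read-only SQL should get through
        expect(Array.isArray(body.rows)).toBe(true); // we assert behavior, not the exact SQL text
      });
    });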


The Human in the Loop

Throughout, my role was less “coder” and more “orchestrator”:

  1. Prompt architect

  2. Context curator

  3. Quality gatekeeper

I found that my domain expertise—knowing what the UX should feel like, understanding data‑schema trade‑offs, and recognizing security blind spots—was indispensable. The agent unlocked speed, but I guided purpose.


Where We Go from Here

AI-assisted development is no longer science fiction. It's very much real, and very much in its infancy. Yet:

  1. Non‑technical users still face too many moving parts to trust these tools in production.

  2. Standardization (e.g., LSP‑style protocols for agents) is needed to bridge the “haves” and “have‑nots.”

  3. Community knowledge (deep tutorials, case studies) lags behind hype‑cycle content.

  4. Technical domain knowledge will keep playing a crucial role; for example, the differences between npm's and Maven's dependency management, or the lack of Java-style libraries like spring-data in Node.js, will cause confusion until the ecosystems mature.

  5. Fast-evolving landscape: I started with a known debt of not using MCP, but in the two to three weeks between starting to code and writing this, significant changes have already happened: Firebase Studio has launched and makes much of this easier, GPT-4.1 has launched and is arguably better than Claude 3.7, and Anthropic has published best practices for agentic coding.

Still, I'm pretty excited. If a backend-focused engineer like me, whose day job is increasingly management, can become a lightweight frontend developer in a weekend, it shouldn't be long before a well-tuned agent at our side helps us achieve very large productivity gains.


Getting Started Yourself

  1. Choose your agent: Claude Sonnet 3.7, ChatGPT Plugins, or GitHub Copilot.

  2. Set up a single‑language stack on your hosting provider (e.g., Node.js on Vercel / v0, or Firebase).

  3. Iterate with prompts—refine your instructions as you learn the agent’s quirks.

  4. Guardrails first: add linters, input validators, and integration tests early (see the sketch after this list).

  5. Share your learnings: we need more deep‑dive tutorials, not just YouTube shorts or tweets.
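On the guardrails point, even a few lines of input validation in front of the LLM call go a long way. A minimal sketch, assuming the zod library; the schema, limits, and file name are illustrative.

    // validate-question.ts (illustrative): reject bad input before any SQL is generated
    import { z } from "zod";

    export const QuestionSchema = z.object({
      question: z.string().trim().min(3).max(500), // no empty or absurdly long prompts
    });

    export type QuestionInput = z.infer<typeof QuestionSchema>;

    // Usage inside the API route:
    //   const { question } = QuestionSchema.parse(await req.json());
    // A failed parse throws, which the route can turn into a 400 response.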


Agents won’t replace us—but they will empower us to tackle more ambitious problems, faster. Embrace the rise of the AI‑assisted polyglot developer, and let’s build the future, one prompt at a time.

Check out the full project on GitHub:
https://github.com/AbhiramDwivedi/table-stakes


Wednesday, August 12, 2020

How to crack AWS Solutions Architect Pro Certification

A few weeks ago I cleared the AWS Certified Solutions Architect Professional certification on my first attempt, with a clear pass. Today, let's talk about what I did, how I did it, the mistakes I made, and how I would do it differently if I had then the hindsight I have now.

The usual disclaimer: AWS CSA Pro is a tough exam. I have 17 years of industry experience and hold multiple certifications, and this is the first time since college that I was reminded of taking exams for classes I understood nothing of! Unless you really need it, I would recommend NOT doing it.

If you're still reading, aka determined to take the certification, congratulations on taking the first, hardest step and good luck getting it!

The AWS CSA Pro exam is a three-hour exam with no scheduled breaks and 75 multiple-choice questions. The exam can be taken online or onsite. To me, one of the toughest things about the exam was navigating the maze-like stories built on simple concepts. English is my second language, and if it is yours too, I would recommend requesting the extra 30 minutes when scheduling, which I did not. I had questions where all answers looked correct, or all looked wrong, until I read them a couple of times and narrowed down the similarities and differences.

From a knowledge perspective, the basics are not significantly different from AWS CSA Associate, but the devil does lie in the details here; CSA Pro is all about details. I did my CSAA in 2017, and a lot has changed in the dynamic cloud landscape in these three years, including the material covered in the certification. Don't repeat my mistake of thinking that you know AWS because a) you are certified, b) you have hands-on experience with *some* services, or c) you have been following announcements. AWS and the cloud are a fast-changing landscape, and knowledge becomes stale earlier than you might imagine.

I spent over six months preparing for the certification, although that was broken across multiple sprints. Don't do that. All you need is two months of dedicated study.

I had a fantastic experience with acloud.guru for CSA Associate and I still recommend it for CSAA, but I would NOT recommend the same for CSA Pro. The course itself is good for practical knowledge; it talks about real-life experiences and the expectations from a Professional Architect, but it does not do justice to exam preparation. It does recommend various whitepapers, but then, the whole idea of buying a course is to ease the pain of going through dry documentation. If you must buy acloud.guru, buy it for knowledge, not for the certification. If you do, take a look at all the referenced re:Invent videos and whitepapers; the number of whitepapers you'd have to read is enormous, and I wouldn't be able to link them all here. One video I particularly liked is Advanced VPC Design.

My employer provides a Udemy subscription, and I used Stéphane Maarek's course for my primary study. This course is pretty good, and I recommend it. Unlike acloud.guru, which you can usually speed up to 1.5x if not 2x, this course is dense and really needs to be taken at 1x speed; plan to go through it twice. I loved that Stéphane summarizes some sections, and I hope he adds similar summaries to the others. The course is sharp about drawing attention to minor-ish details that matter for the exam, e.g., cost vs. speed of the various AWS Elastic Block Store options. Feel free to pause and repeat as many times as you need to, and take a break if it gets heavy and focus becomes a challenge. The course covers details that may pass by in a jiffy but show up as exam questions; remember, the number of services and details covered in the exam is humongous. Also note that this course is classified as a "summary" by some sites and is indeed one of the shorter ones; it does not cover console walkthroughs or any other practicals and is completely theory-based, which brings me to another course I used intermittently.

A colleague recommended Zeal Vora's course to me. I looked at it primarily because I was not feeling confident enough. It is a very detailed course, and Zeal does a fantastic job of showing how to use the various services; I would strongly recommend it for 201-level deep dives for on-the-job work. However, apart from being too long, I think it does not do justice to highlighting the key concepts for the exam, nor to explaining the higher-level skills an Architect Professional needs on the job. If you can afford two courses, use this one for the specific chapters you have trouble understanding; e.g., I come from a development background and networking is my weak area, so I spent time on Zeal Vora's course as well as a couple of re:Invent videos (re)understanding AWS networking concepts.

Tip: Udemy runs promotions where courses are available at significant discounts. Sign up for emails and, if you're not in a hurry, wait for a promotion to buy the course.

One strategy that worked for me during the exam was grouping the answers into categories and then identifying the correct choice within each category, to narrow many options down to the required 2, 3, or 4 correct answers.

Finally, one last thing that helped me practice questions was Jon Bonso's sample exams on Tutorials Dojo. I had some technical issues with lost answers, but other than that, I liked them. They give you wide enough exposure to catch items you overlooked during preparation; e.g., I had missed "networkMode" in Fargate, and after seeing it in the practice exams, I was able to go back, read up, and understand it.

In summary:
- Have a wide exposure and real knowledge before you choose to take up the certification
- Use Stéphane Maarek's course for understanding
- Try out Jon Bonso's sample questions
- Remember, it is not going to be easy. Don't be harsh on yourself when preparation gets hard.


Monday, May 11, 2020

GitOps your Elastic Beanstalk environment properties

In today's blog, I am going to cover how to set up environment variables for Elastic Beanstalk, with a focus on a GitOps approach. AWS prefers to call these environment properties.

Enterprise applications depend on multiple configuration values that change across different environments. In the very pragmatic approach of Build Once, Deploy Many, an application bundle is created once and deployed to multiple environments. Such configuration is strictly stored outside the application code, as per the Twelve-Factor principles.

For an Elastic Beanstalk application, there are different ways to provide this configuration:

  1. The crudest way to set up these configurations is to log in to the EB console and add them manually. This is well explained in the AWS documentation and I am not going to cover it here. Being manual, it is error-prone and does not scale.
  2. A far better approach is to commit such configuration to version control using configuration files. This uses the super-powerful ebextensions and is explained in the AWS documentation with an example of .ebextensions/options.config. If you're new to setting these up, I'd suggest staying away from the detailed documentation, which can be pretty confusing. A few things to note here:
    1. The config file must be valid YAML. I learned this the hard way, and now use my favorite linter, http://www.yamllint.com/, to validate. Given the structure of this config file, it's easy to make formatting mistakes, and the documentation does not mention that it is YAML.
    2. The weird-looking option_settings block is how you define environment variables; it is explained in a not-so-intuitive way in the AWS documentation.
    3. The namespace is important when defining anything in this config file. For environment variables, the namespace is aws:elasticbeanstalk:application:environment.
    4. This config file can reference AWS CloudFormation! This is a big deal: it means you can reference other resources created earlier in your CFN stack. For example, when your database was created, it might have added credentials to Secrets Manager, and those can be referenced through this config file. It is possible to reference Secrets Manager through code as well, and frameworks such as Spring make it a breeze, but what such frameworks cannot do is reference your CFN stack.
    5. The config file can be a full-fledged CFN file; for example, look at this sample provided by AWS. Apparently, CFN references can either be pseudo references or references to resources created by this config file itself. You could always reference CFN parameters from other stacks, but you'd probably need to know the stack ID, and without a pre-defined naming convention that can be a pain. We are not going to do that today.
    6. Finally, note the back-tick when referencing CFN. Miss it, and things start failing!
But what if you wanted to access custom parameters that are not an output of CloudFormation? How would you access those?

Let's take a step back. Where would you even keep your custom parameters?

AWS provides a valuable service, SSM Parameter Store, to store such custom parameters. Once defined, they can be referenced consistently by different applications in your account, such as Lambda and Elastic Beanstalk. We version-control these parameters and deploy them to our account.

So, how do we reference parameters stored in Parameter Store in our application? It turns out Elastic Beanstalk does not have a well-documented way to define an environment variable that references an SSM parameter. You could try to set it using ebextension hooks, but that is a rabbit hole that burned over two days of my time and still did not work. A couple of folks have exported variables using these hooks and used them in their EB applications; that did not work for me.

What does work, however, is our good old friend, CloudFormation. CFN allows referencing SSM values. With that in mind, our config file can easily be modified to reference SSM parameters, such as in the snippet below.
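A minimal form of that snippet looks like this (the parameter name here is illustrative):

    option_settings:
      aws:elasticbeanstalk:application:environment:
        CUSTOM_SSM_PARAM: '{{resolve:ssm:CUSTOM_SSM_PARAM:VERSION}}'  # VERSION must be a specific version number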

Let me explain this tiny snippet a little more. At line 3, we define the custom environment property, named "CUSTOM_SSM_PARAM", which refers to an SSM parameter of the same name, plus a specific version number. Note that, as of writing this blog, CFN does not support using the LATEST version (duh!) of a parameter; a specific version must be specified, and failing to do so throws an error. Also note that while String-type SSM parameters can be referenced, SecureString parameters cannot be accessed. This limitation is documented here.

A final note: if you look for this environment variable after setting it up through ebextensions, you'd see that the value in the UI console still shows up as '{{resolve:ssm:CUSTOM_SSM_PARAM:VERSION}}' and not the resolved value. From the application's perspective, however, it resolves just fine, implying this is not a one-time static binding but a truly dynamic reference to SSM.

Saturday, May 4, 2019

DevOps 101








So, you've heard the term DevOps and are curious: why are people going bananas over it?! What really is DevOps? Is there an industry-standard definition for it? Does it relate to Agile? Is it based on principles, like Agile? My teams are already having a hard time delivering their best; why do I have to hire a DevOps person now?

All very valid questions. And guess what? I will have some answers here and some more, later.

Let's begin with the difficult part. There is no universal definition of DevOps and, unlike Agile, no DevOps Manifesto; when you talk to people, each will have a similar but varying definition of it. In fact, in the organization I consult for, there are 172 groups with "DevOps" in their name, under varying management! So, does that mean nobody can say what DevOps is? No, not really; that wouldn't be true. There are broad principles associated with DevOps practices and culture, and those 172 groups would fit somewhere on that spectrum.

But wait... culture? Did I just say culture? What has DevOps got to do with culture? Isn't it about using those fancy, work-in-progress tools that evolve so fast nobody can keep up? Well, my esteemed reader, it is so much more than tools. It is about practices and culture and more. But I am getting ahead of myself here.

The term "DevOps" itself combines "Developers" – people who write code – and "Operations" – people who keep that code, and the infrastructure under it, running across environments – essentially implying close collaboration between these folks. When we discuss some of the practices, probably in a future blog, we'll see how this collaboration shows up in various DevOps activities. Generally speaking, DevOps implies the use of engineering practices that enable quicker delivery of well-tested, good-quality code to a robust production environment, with significant automation tied into the process.

With that high-level definition of mine out of the way, let's have a quick chat about some more details:

§  A brief History


Patrick Debois, a Belgian consultant, is credited with coining the term DevOps, implying collaboration between developers and operations. Apparently, the term was first used for the DevOpsDays 2009 conference in Belgium. The idea of DevOps formed and spread like wildfire, as various industry veterans came together to share their learnings and passion. At the Velocity 2009 conference, John Allspaw and Paul Hammond presented "10+ Deploys Per Day: Dev and Ops Cooperation at Flickr" (link in references), which started shaking up how deployments were looked at. Time to market has been shrinking ever since; based on a 2016 report, Amazon deploys code to production every 11.7 seconds on average.

This feat of continuous deployment is achieved through various architectural and engineering choices that have to be made at the start of application development. No wonder you hear DevOps stories more often from new-age companies. However, it would be factually incorrect to assume that DevOps applies only to greenfield projects or only to newer or smaller companies. Most of the biggest organizations across various sectors, including highly regulated ones such as the federal government, are adopting DevOps practices for better profitability and competitiveness, or to drive innovation faster.

You might wonder: why deploy faster? Great question, and a really, really good topic for a future blog.


§  What are DevOps practices?

How do we say a team is adopting DevOps, and what do they do when they "do DevOps"? From a 10,000-ft perspective, it comes down to CALMS:


§  C for Culture

      • DevOps aims to establish motivated teams that work with a growth mindset and share pride in, ownership of, and responsibility for the product.

§  A for Automation

      • Automation is a cornerstone of the DevOps movement and facilitates collaboration. Automating tasks such as testing, configuration and deployment frees people up to focus on other valuable activities and reduces the chance of human error.

§  L for Lean

      • Team members are able to visualize work in progress (WIP), limit batch sizes, and manage queue lengths. Again, we depend on our partners from the Agile community to help with this.

§  M for Measurement

      • DevOps teams measure a lot, from the performance of the delivery pipeline itself to application and infrastructure health. This includes things like CPU/memory monitoring, JVM monitoring, and Change Lead Time. The Four Key Metrics, now in the "Adopt" ring of the ThoughtWorks Technology Radar, are, as the name suggests, the key metrics for measuring DevOps itself.

§  S for Share

      • Share success, failure, and feedback, between and across teams and members.

§  How do I learn DevOps

OK, all that mumbo-jumbo is good. Now, where do I start learning DevOps?

I will give you three paths:
    1. Or, wait for more blogs 
    2. Or, look at this learning path 

I know, this was a bad-joke section. Let's move back to serious stuff :)

§  DevOps Thought Leaders

Fortunately, there are many folks in DevOps who really love to share their awesome work. Some folks I follow are listed below. This is by no means an exhaustive list, just the ones I follow:

James Turnbull
Chris Riley
Kelsey Hightower
Sean Hull

References:
https://devops.com/the-origins-of-devops-whats-in-a-name/
https://newrelic.com/devops/what-is-devops
https://www.devopsdays.org/about/
https://techbeacon.com/devops/10-companies-killing-it-devops
https://docs.microsoft.com/en-us/azure/devops/learn/what-is-devops-culture
https://martinfowler.com/bliki/DevOpsCulture.html
https://whatis.techtarget.com/definition/CALMS
https://www.scaledagileframework.com/devops/
https://www.thoughtworks.com/radar/techniques/four-key-metrics
https://medium.com/@fabiojose/devops-kpi-in-practice-chapter-2-change-lead-time-and-volume-9e80ac7ca54
https://www.agilealliance.org/glossary/lead-time
https://github.com/kamranahmedse/developer-roadmap
https://sweetcode.io/top-10-thought-leaders-devops/

Friday, April 5, 2019

Jenkinsfile -- To collocate or not to collocate


To collocate or not to collocate Jenkinsfile

Problem

While building pipeline-as-code recently for one of our projects, we were faced with a conundrum: whether or not to co-locate our Jenkinsfiles with the application code. Does it even matter?


Default Solution

Our default opinion was to co-locate the Jenkinsfile with the application code, as that's the whole point: from the same code base we build and deploy the code, as shown below:



This idea had some advantages. With just a default checkout, Jenkins is able to find the code as well as the pipeline to build and deploy it. We use Bitbucket for our development, so this approach comes with the added advantage that we can use multibranch pipelines without any additional effort.






Challenges

However, pretty soon after we started doing this, we ran into some challenges. While the DevOps engineer was modifying the Jenkinsfile (remember, we were the first ones to build it) and the application developers were simultaneously modifying the code base, it resulted in multiple deployments, aka server restarts, while the developers were checking whether their code worked in development. At times, it also resulted in broken builds while the DevOps engineer was improving the pipeline, for example by adding a Sonar scan. We knew that, as the first people to use Jenkins Pipeline in the enterprise, there would be challenges, and we chose to live with them.

Application development continued rapidly and then stabilized; things looked good, and deployments were happening to dev and test as expected. Still, we felt we were not doing the right thing. But why? We couldn't really put it into words, until we wanted to deploy to the Acceptance environment, which we thought would be uneventful. Except that whenever we modified our pipeline, such as adding a deployment stage for ACPT, we were modifying the code base. That's when we confirmed our problem: we were violating the principle of keeping code and configuration separate (ref: https://12factor.net/config). It meant that whenever we changed our pipeline, we would have to build the code again, which was not what we wanted. The code smell was obvious.




Final Approach



By now, we had realized that the Jenkinsfile should not really be co-located, but we still wanted developers to be able to build the code, run various tests on it, check code quality, and potentially deploy to a dev-like environment themselves. It was a choice between giving more power to developers and following sane conventions that keep production deployments in the hands of people more experienced with them.

We eventually decided to have two kinds of Jenkinsfiles:

  1. A usual Jenkinsfile, called just that, which builds the code and runs tests on it (and potentially deploys to dev in a future state), used on feature branches.
    • It is configured on Jenkins as a multibranch pipeline as well, ensuring we can run those tests for each feature branch (which is created per story).
    • It sends emails to the developers and culprits upon failure.
    • Developers have full control over it and can change it as needed; e.g., when one of our developers was working on a story to fix code-quality issues, she ran Sonar and Nexus IQ scans from it, which we generally don't run on feature branches.

  2. Another Jenkinsfile, kept separate from the code in a different repository, used to build AND deploy code from master. This is really our deployment pipeline: it builds, deploys, and performs the whole nine yards of activities needed to take code to production.
    • This ensures that our pipeline, which is configuration, remains separate from our code and can be built and modified without impacting the code base.
    • It sees more changes, especially now that we are doing this for the first time, although it will eventually stabilize too.
    • It is a little more controlled, and usually modified only by the DevOps engineer; however, developers have permissions to modify it.
    • Failures in this pipeline should trigger emails to the entire team.





We did consider having a single Jenkinsfile that builds off master and feature branches, with different workflows for feature vs. master. However, we chose not to go this route: given our inexperience with Jenkinsfiles, it would probably make our Jenkinsfile more complex than we want. We want our developers to be able to understand and modify the Jenkinsfile, but we don't want to burden them with information they usually don't need to dig into.




Looking forward

I believe we will eventually move to a single Jenkinsfile, kept separate from the code base, with different workflows for master, feature, and release branches. This may happen after we all, developers and DevOps engineers alike, become more proficient with Jenkinsfiles.

We don't have any workflows for pull requests, nor are we using shared libraries at the moment, but both are on our bucket list. We don't think either would impact where we keep our Jenkinsfiles.