Deep Dive · AI-First Engineering
From Skeptic to Convert: How 20 Years of Engineering Leadership Met the Power of GenAI
A veteran technology executive reflects on leaving corporate life, building a product from scratch with AI tools, and why the “human + AI” model isn't hype — it's the new baseline.
By Jon Sinclair · Founder, ClinRS AI · Former VP of Engineering, Advarra
The blank IDE problem
For most of the last decade, my job was to scale engineering organizations — hiring, architecting systems, setting technical strategy, managing roadmaps that involved dozens of engineers across multiple product lines. My hands were, by necessity, mostly off the keyboard.
When I left my VP role at Advarra, I sat down in front of a blank IDE for the first time in years. No team to delegate to. No sprint board. Just me, a domain idea, and an infrastructure that needed to exist.
I'll be honest: my initial instinct was skepticism about leaning on GenAI to fill that gap. I've watched enough “next big things” come and go over 20 years to have developed a healthy filter. The hype cycles follow a pattern. The terminology changes. The underlying promise rarely lives up to the launch-day breathlessness.
What happened next changed my mind — not in a slow, incremental way, but in the kind of sharp, irreversible way that only happens when you experience something firsthand.
The old way
Before AI-assisted development was a serious option, building something like the ClinRS platform from scratch would have followed a predictable path. Scope the architecture. Stand up cloud infrastructure. Build CI/CD pipelines. Write boilerplate. Iterate on data models. Design the UI. Test. Refactor. Debug. Deploy.
None of those steps are intellectually hard for a senior engineer — but collectively, sequentially, they consume weeks. Months, if you're building with the care and compliance-consciousness that healthcare software demands. The alternative was to hire a small team, accept the coordination overhead, and surrender a chunk of your runway to salaries before a single customer had seen the product.
That calculus has always disadvantaged the small operator in favor of the well-capitalized. Speed was a function of headcount. Quality was a function of process and review cycles. These were treated as fixed constraints.
The math has changed.
What actually changed: building out ClinRS using Gemini, Cursor, and Claude
When I started building ClinRS, I had a blank IDE and some ideas about tools that could help clinical research teams. No team. No product manager to define requirements. No separate engineering lead to iterate with. No test lead to double-check everything.
I had to wear all the hats. And I wore them iteratively.
First, I'd put on the product manager hat. I'd use Cursor and Claude to sketch out a feature, validating the intent, the user flow, the data shape. Not hallucinating; thinking and researching. The AI accelerated the iteration, but I was the one asking the hard questions: What are the top issues with patient retention? Which features would make the biggest dent in those issues? Will a CRA actually use this? Then I'd weigh whether the answers squared with what I'd seen and what I was hearing from folks in the industry. Iterate some more. Refine.
Then I'd switch to the engineering lead hat. Same tooling, different lens. Infrastructure decisions. Testing strategy. How does this scale? What breaks when we have real data volume? I'd refactor designs, rethink the schema, add validation layers — all things I'd traditionally workshopped with a team. Here, I was doing them alone, but faster.
This wasn't a linear process. It was more like building an onion in layers, making sure each foundation was solid before stacking the next piece on top. Testing wasn't an afterthought. Security and compliance considerations weren't bolted on at the end. They were baked in from the start because I knew, from 20 years of shipping real systems, where the pitfalls live.
A couple of weeks in, I had a functioning platform. Real infrastructure. Real testing. Real compliance scaffolding. Not a prototype. Not a demo. A thing that actually works.
The website you're on now? That came next, and it came fast. A clean, simple example of what's possible when you can move at speed. But here's the crucial distinction: the website is the visible part. What makes it real — what makes it safe, compliant, and actually production-grade — is the work underneath. The infrastructure, the data models, the testing, the decisions that only an experienced engineer would make.
A small organization can now ship polished, real-looking products in weeks. But you still need someone who's actually built production systems to ensure the whole thing is safe, auditable, and won't collapse under real-world conditions.
That's the shift GenAI enables. Not replacing the senior engineer. Amplifying them.
The old way vs. the AI-augmented way
| Before GenAI | With GenAI (human-piloted) |
|---|---|
| Boilerplate written manually, line by line | Scaffolding generated and validated in minutes |
| Refactoring deferred — too expensive to prioritize | Refactoring continuous — cost is near zero |
| Infrastructure setup measured in days | Infrastructure deployed in hours |
| UI iteration blocked on design-dev handoffs | UI iterated in real time, same session |
| Small teams meant slow delivery or high burn | One senior engineer operates at team scale |
| Compliance review was a late-stage bottleneck | Compliance thinking embedded from day one |
Why this matters more in healthcare and life sciences
The life sciences domain has a particular relationship with speed and precision that makes the GenAI moment especially consequential. Regulatory compliance isn't optional. Data model decisions in healthcare software carry downstream risk that a typical SaaS product doesn't face. The cost of getting it wrong — in clinical trial data management, in patient data handling, in audit trail integrity — isn't just a bad sprint retrospective. It's an FDA finding. It's a SOC 2 gap. It's a deal-breaker with a pharma or biotech client who has spent years building their own compliance posture.
This is precisely why the “human + AI” framing matters so much in this space. GenAI tools are powerful but they are not domain-aware by default. They do not know that a data model decision in a clinical data management system has different implications than the same decision in a CRM. They do not know that a particular API design could create an audit trail gap. They do not know that the shortcut that works in a standard SaaS context fails a 21 CFR Part 11 requirement.
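To make the audit trail point concrete, here is a minimal sketch of the pattern a compliance-aware engineer would insist on: changes are recorded as immutable, append-only entries rather than overwriting data in place. All names here (`AuditEntry`, `AuditTrail`, the field names) are hypothetical illustrations, not the ClinRS implementation, and a real 21 CFR Part 11 system would need far more (authenticated identities, tamper-evident storage, retention controls); this only shows the shape of the idea.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass(frozen=True)
class AuditEntry:
    """One immutable record of a data change: who changed what, when, old -> new."""
    actor: str
    entity: str
    field_name: str
    old_value: Any
    new_value: Any
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class AuditTrail:
    """Append-only log: entries can be added and read, never edited or deleted."""

    def __init__(self) -> None:
        self._entries: list[AuditEntry] = []

    def record(self, actor: str, entity: str, field_name: str,
               old_value: Any, new_value: Any) -> AuditEntry:
        entry = AuditEntry(actor, entity, field_name, old_value, new_value)
        self._entries.append(entry)
        return entry

    def history(self, entity: str) -> list[AuditEntry]:
        """Return every recorded change for one entity, oldest first."""
        return [e for e in self._entries if e.entity == entity]

# Correcting a value records a new entry; the prior value is never lost.
trail = AuditTrail()
trail.record("jsmith", "subject-1042", "visit_date", "2024-03-01", "2024-03-08")
trail.record("jsmith", "subject-1042", "visit_date", "2024-03-08", "2024-03-15")
```

The "shortcut that works in standard SaaS" is the opposite of this: a plain `UPDATE` that destroys the old value. Both ship the same feature; only one survives an audit.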
An experienced pilot does know those things. And with GenAI handling the execution velocity, that pilot can now fly faster, at altitude, without sacrificing the judgment that altitude requires.
AI is an exoskeleton for the engineering mind. It doesn't replace the judgment — it amplifies the throughput of the person who has it.
This is how software will be built
I want to be careful here not to slide into the kind of breathless prediction that I was skeptical of at the start. So I'll say it plainly, without the superlatives: the economics of software development have fundamentally shifted, and the shift is not going back.
Let's be clear: this isn't vibe coding. I really appreciated the perspective Simon Willison shared on a recent episode of Lenny's Podcast, which perfectly captures how I feel. To summarize: "vibe coding" is a fantastic way for people to build cool-looking things quickly without reviewing the code. Simon uses the term agentic engineering for when a professional software engineer uses these tools to move quickly while retaining a deep understanding of the code and the overall project. I think this is a hugely important distinction.
The question for every engineering organization in the next three years is not whether to adopt AI-augmented development. That ship has sailed. The question is whether the people leading those organizations have the experience to do it responsibly — to know when the AI output is right, when it's subtly wrong, and when it's confidently generating something that will create a problem six months from now that nobody will be able to trace back to its origin.
Junior engineers using GenAI without guardrails is a risk. Senior engineers using GenAI with judgment is a force multiplier. The distinction matters enormously, and it is a distinction the market has not fully priced in yet.
For smaller healthcare and life sciences companies — the ones that can't afford a 30-person engineering org but need enterprise-grade technical execution — this shift represents an opening. You no longer need the headcount to move at speed. You need the right person, with the right experience, using the right tools.
This is what ClinRS is built to do
ClinRS applies an AI-first engineering philosophy to help healthcare and life sciences companies build with speed and impact, without the overhead of a traditional engineering build-out. Whether you need fractional CTO leadership, a technical partner for a specific build, or a strategic sounding board for your engineering roadmap, the model is the same: senior judgment, AI-amplified velocity, domain-grounded precision.
If that sounds like what your organization needs...
Let's talk →