<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Jonathan Blanchet</title><description>Welcome, I&apos;m Jonathan Blanchet,</description><link>https://jonathanblanchet.com/</link><item><title>2024 in Review</title><link>https://jonathanblanchet.com/blog/2024-in-review/</link><guid isPermaLink="true">https://jonathanblanchet.com/blog/2024-in-review/</guid><description>Year 2024 from my perspective</description><pubDate>Sun, 29 Dec 2024 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;As we wrap up 2024, here&apos;s my curated collection of the resources I found most impactful for engineering leaders working at the intersection of web technologies and AI systems.&lt;/p&gt;
&lt;h2&gt;Articles&lt;/h2&gt;
&lt;h3&gt;Software Engineering&lt;/h3&gt;
&lt;p&gt;These two articles from Addy Osmani are must-reads for any engineer working with AI.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://addyo.substack.com/p/the-70-problem-hard-truths-about&quot;&gt;The 70% Problem: Hard Truths About AI-Assisted Coding&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://addyo.substack.com/p/future-proofing-your-software-engineering&quot;&gt;Future-Proofing Your Software Engineering&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;AI&lt;/h3&gt;
&lt;p&gt;Anthropic is doing an amazing job at popularizing LLM research.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.anthropic.com/research/building-effective-agents&quot;&gt;Building Effective Agents&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.anthropic.com/news/model-context-protocol&quot;&gt;Introducing the Model Context Protocol&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.anthropic.com/research/mapping-mind-language-model&quot;&gt;Mapping the Mind of a Large Language Model&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A few other important articles this year:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://darioamodei.com/machines-of-loving-grace&quot;&gt;Machines of Loving Grace&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://about.fb.com/news/2024/07/open-source-ai-is-the-path-forward/&quot;&gt;Open Source AI is the Path Forward&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Releases&lt;/h3&gt;
&lt;p&gt;Two releases that, from my perspective, will have a big impact on the future of AI:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://openai.com/index/introducing-structured-outputs-in-the-api/&quot;&gt;Introducing Structured Outputs in the API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://machinelearning.apple.com/research/introducing-apple-foundation-models&quot;&gt;Introducing Apple’s On-Device and Server Foundation Models&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Videos&lt;/h2&gt;
&lt;p&gt;An amazing video series by Andrej Karpathy on building neural networks and language models from scratch, in code. I re-watched it this year and it&apos;s still as relevant as ever.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=VMj-3S1tku0&amp;amp;list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ&quot;&gt;Neural Networks: Zero to Hero&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Books&lt;/h2&gt;
&lt;p&gt;A few of the books I found most impactful this year:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.engmanagement.dev/&quot;&gt;Engineering Management for the Rest of Us&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://fleuret.org/francois/lbdl.html&quot;&gt;The Little Book of Deep Learning&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.oreilly.com/library/view/leading-effective-engineering/9781098148232/&quot;&gt;Leading Effective Engineering Teams&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Tools&lt;/h2&gt;
&lt;p&gt;The most impactful tools I found this year:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.cursor.com/&quot;&gt;Cursor&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://v0.dev/&quot;&gt;v0.dev&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Miscellaneous&lt;/h2&gt;
&lt;p&gt;A few other resources worth checking out:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://bbycroft.net/llm&quot;&gt;LLM Visualization&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/stas00/ml-engineering?tab=readme-ov-file&quot;&gt;ml-engineering&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;2024 has been a transformative year for both web technologies and AI.
The convergence of these fields has opened new possibilities for building sophisticated, AI-powered web applications.
As we move into 2025, this convergence will likely continue and we&apos;ll see more and more AI-powered applications in our daily lives.&lt;/p&gt;
&lt;p&gt;Let&apos;s keep learning and building and see what 2025 brings.&lt;/p&gt;
</content:encoded></item><item><title>The missing piece of the AI Assisted Coding puzzle</title><link>https://jonathanblanchet.com/blog/the-missing-piece-of-the-ai-assisted-coding-puzzle/</link><guid isPermaLink="true">https://jonathanblanchet.com/blog/the-missing-piece-of-the-ai-assisted-coding-puzzle/</guid><description>Why today&apos;s version control misses the reasoning behind changes—and why that matters in the era of AI-assisted coding.</description><pubDate>Tue, 19 Aug 2025 00:00:00 GMT</pubDate><content:encoded>&lt;h1&gt;The missing piece of the AI Assisted Coding puzzle&lt;/h1&gt;
&lt;p&gt;This post is the first in a series addressing versioning in the age of AI-assisted coding.&lt;/p&gt;
&lt;h2&gt;The Limits of Today&apos;s Version Control&lt;/h2&gt;
&lt;p&gt;Over more than twenty years in engineering, I&apos;ve seen version control become the backbone of software teams. I&apos;ve used filename-based versioning, SVN, Bazaar, Git… (luckily, I managed to avoid CVS).
As a VP of Engineering today and a former CTO, I rely on Git and the platforms built on top of it to track the &lt;em&gt;what&lt;/em&gt; and the &lt;em&gt;when&lt;/em&gt; of code changes with remarkable precision. I know who on our team made a change, when it happened, and what the diff looks like. But the &lt;em&gt;why&lt;/em&gt; behind a change is usually much harder to recover.&lt;/p&gt;
&lt;p&gt;Commit messages attempt to address this gap, but in practice they often read like &lt;em&gt;“fix bug”&lt;/em&gt;, &lt;em&gt;“update styles”&lt;/em&gt;, or &lt;em&gt;“refactor auth”&lt;/em&gt;. Even with conventions—such as &lt;strong&gt;Conventional Commits&lt;/strong&gt;, or linking issues from &lt;strong&gt;Linear&lt;/strong&gt; or &lt;strong&gt;GitHub&lt;/strong&gt;—we still mostly track the outcome, not the reasoning process, even in really well-written commit messages.&lt;/p&gt;
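&lt;p&gt;For illustration, a Conventional Commits message can carry part of the reasoning in its body, though nothing in the format enforces it; the message below is invented:&lt;/p&gt;

```text
fix(auth): retry token refresh on transient 5xx errors

Users were being logged out during brief identity-provider outages.
Retrying with backoff keeps sessions alive without masking real
authentication failures (4xx responses still fail immediately).
```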
&lt;p&gt;Looking back, I can think of countless times when teams, including my own, lost time chasing down context because the motivation for a design choice lived only in the memory of someone who may no longer be on the team, or in a buried conversation. Today, much of our development work is shaped by &lt;strong&gt;conversations&lt;/strong&gt;: design discussions in issue trackers, debates in pull requests, architectural notes in Notion or Linear, and now, more importantly, back-and-forth exchanges with &lt;strong&gt;LLMs&lt;/strong&gt; in coding agents.&lt;/p&gt;
&lt;p&gt;AI-assisted coding is probably the biggest shift in how developers work that I will witness in my career. I&apos;ve never been convinced that most of us will be replaced by AI assistants (and I still don&apos;t think so), but it is definitely reshaping our work and what I&apos;ll look for when recruiting future engineers.&lt;/p&gt;
&lt;p&gt;These conversations, especially the ones we now have with LLMs, represent the decision-making process that leads to the final code, yet they are lost the minute the code is pushed.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Why This Gap Matters More Now&lt;/h2&gt;
&lt;p&gt;Historically, this gap was frustrating but tolerable. Engineers could ask colleagues, search StackOverflow, dig through Slack, or hunt for documentation. But as teams move faster, codebases evolve more rapidly, and AI tools contribute directly to code, the lack of reasoning history is becoming a significant blocker.&lt;/p&gt;
&lt;p&gt;LLMs are getting remarkably good at generating code (in the last two days alone, Google released Gemini 3 and OpenAI released GPT-5.1-Codex-Max), but without this context they still struggle to align with past decisions.
Imagine onboarding a new team member—human or AI—who can see &lt;em&gt;what&lt;/em&gt; the code does but not &lt;em&gt;why&lt;/em&gt; it was built that way. They risk repeating mistakes, undoing deliberate trade-offs, or introducing inconsistencies. And this is not only a problem for new engineers, but also for LLMs themselves, as they struggle to reason about the code they&apos;ve written in another session.&lt;/p&gt;
&lt;p&gt;To make both LLMs and humans truly effective collaborators, we need a better way to preserve reasoning as part of the development record.&lt;/p&gt;
&lt;h2&gt;Toward Reasoning-Aware Versioning&lt;/h2&gt;
&lt;p&gt;What if version control evolved beyond code diffs to include structured reasoning? Imagine being able to answer questions like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Why did we choose Redis over Postgres for session storage?&lt;/li&gt;
&lt;li&gt;What risks did we consider when enabling this feature flag?&lt;/li&gt;
&lt;li&gt;Which trade-offs guided the design of this API last year?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Some of this context exists today in issues, PRs, or documentation, but none of it is guaranteed to live alongside the code itself. A reasoning-aware version control system would treat these answers as &lt;strong&gt;first-class citizens&lt;/strong&gt;, directly linked to the history of the project.&lt;/p&gt;
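&lt;p&gt;As a thought experiment, here is one possible shape for such a record. The schema and field names are hypothetical, not an existing tool or standard:&lt;/p&gt;

```python
from dataclasses import dataclass, field

# Hypothetical "reasoning record": a structured answer to questions like
# the ones above, linked to the commit it explains. Illustrative only.
@dataclass
class ReasoningRecord:
    commit: str          # hash of the commit this reasoning explains
    decision: str        # what was decided
    rationale: str       # why it was decided
    alternatives: list = field(default_factory=list)  # options considered and rejected
    risks: list = field(default_factory=list)         # known trade-offs

record = ReasoningRecord(
    commit="3f2a1bc",
    decision="Use Redis over Postgres for session storage",
    rationale="Sessions are ephemeral and read-heavy; Redis TTLs expire them for free",
    alternatives=["Postgres table plus a scheduled cleanup job"],
    risks=["Sessions are lost if Redis restarts without persistence"],
)
```

&lt;p&gt;Stored alongside the commit, a record like this would let both humans and LLMs query the &lt;em&gt;why&lt;/em&gt; without digging through chat logs.&lt;/p&gt;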
&lt;hr /&gt;
&lt;h2&gt;The Shift Is Already Underway&lt;/h2&gt;
&lt;p&gt;We can already see a shift toward richer context in modern workflows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Documentation&lt;/strong&gt; now often lives inside the codebase.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Conventional Commits&lt;/strong&gt; aim to make history more searchable.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;PR templates&lt;/strong&gt; ask contributors for motivation, risks, and testing notes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Issue trackers&lt;/strong&gt; encourage structured specs and acceptance criteria.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These are steps in the right direction, but they remain fragmented. What&apos;s missing is a unified way to tie reasoning directly to the same artifacts that Git already manages so effectively.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Looking Ahead&lt;/h2&gt;
&lt;p&gt;Reasoning-aware versioning doesn&apos;t require bloated commits or forcing developers to write essays. It&apos;s about capturing the essence of decisions in a structured, lightweight way, enough for future humans and LLMs to understand &lt;em&gt;why the code is the way it is&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The first step is acknowledging that code alone is no longer the whole story. The next step is figuring out how to bring reasoning into the history we already rely on every day.&lt;/p&gt;
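&lt;p&gt;One lightweight possibility that works with plain Git today: serialize a small decision record and attach it to the commit it explains with Git&apos;s built-in notes mechanism (&lt;code&gt;git notes&lt;/code&gt;). The record fields below are a hypothetical sketch, not a standard:&lt;/p&gt;

```python
import json

# Sketch: format a decision record so it could be attached to a commit,
# for example with git's built-in notes feature:
#   git notes --ref=reasoning add -m "$PAYLOAD" "$COMMIT"
# The field names here are a hypothetical illustration, not a standard.
def format_reasoning_note(decision: str, rationale: str, risks=None) -> str:
    return json.dumps(
        {"decision": decision, "rationale": rationale, "risks": risks or []},
        indent=2,
    )

note = format_reasoning_note(
    decision="Enable the new checkout flag for 5 percent of traffic",
    rationale="Limit blast radius while we watch error rates",
)
```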
&lt;p&gt;&lt;em&gt;In the next article, we&apos;ll explore what such reasoning records could look like, and how they might integrate with Git without disrupting existing workflows.&lt;/em&gt;&lt;/p&gt;
</content:encoded></item></channel></rss>