<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/"><channel><title>TalentLMS Tech Blog</title><link>https://blog.talentlms.io/</link><description>Insights and stories from the TalentLMS engineering team. Technical deep-dives, team culture, and lessons learned building learning management software.</description><language>en-US</language><managingEditor>noreply@talentlms.com (TalentLMS Engineering Team)</managingEditor><webMaster>noreply@talentlms.com (TalentLMS Engineering Team)</webMaster><lastBuildDate>Mon, 20 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://blog.talentlms.io/feed.xml" rel="self" type="application/rss+xml"/><image><url>https://blog.talentlms.io/images/logo/logo.svg</url><title>TalentLMS Tech Blog</title><link>https://blog.talentlms.io/</link></image><item><title>Leveraging Cursor in a Large-Scale Project: My First Experience</title><link>https://blog.talentlms.io/posts/leveraging-cursor-in-a-large-scale-project/</link><pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate><dc:creator>Georgios Theodorakopoulos</dc:creator><guid>https://blog.talentlms.io/posts/leveraging-cursor-in-a-large-scale-project/</guid><description>Onboarding onto a large LMS codebase, I used Cursor not to write features faster, but to build a mental model: where legacy PHP meets newer REST layers, how events propagate, and where permission checks actually live.
This post walks through two real explorations (user impersonation across stacks and a permissions trace), with anonymized prompts, what the tool got right and wrong, and a small playbook you can reuse on your own brownfield project.</description><enclosure url="https://blog.talentlms.io/images/posts/leveraging-cursor-in-a-large-scale-project.png" type="image/png"/><media:content url="https://blog.talentlms.io/images/posts/leveraging-cursor-in-a-large-scale-project.png" medium="image"><media:title type="plain">Abstract painted figure before a branching network of nodes and pathways on a warm background</media:title><media:description type="plain">Abstract painted figure before a branching network of nodes and pathways on a warm background</media:description></media:content><content:encoded><![CDATA[<img src="https://blog.talentlms.io/images/posts/leveraging-cursor-in-a-large-scale-project.png" alt="Abstract painted figure before a branching network of nodes and pathways on a warm background" style="max-width: 100%; height: auto; margin-bottom: 1.5em;" /><p style="margin-bottom: 1.5em; padding: 1em; background-color: #f5f5f5; border-left: 4px solid #0066cc;"><strong>Georgios Theodorakopoulos</strong>, Software Engineer<br/>George’s journey into backend development started at the age of 11, when he first encountered the MySQL dolphin
while trying to launch his very first modded Minecraft server. After many failed setups, …</p><p>When I started at Epignosis, I folded <a href="https://cursor.com/" target="_blank" rel="noopener noreferrer">Cursor</a> into how I actually work, not as a novelty, but as something I reached for when the codebase outpaced my notes. It was the first time I relied on an AI editor for more than autocomplete in a real, high-complexity environment. What changed was not my typing speed; it was how I budget time between discovery and implementation.</p>
<h2 id="the-context">The Context</h2>
<p>Every codebase has a personality. History baked into it. Mine arrived as a large LMS: many modules, wires between them, and legacy paths you do not refactor for sport. Onboarding here is not “read the README and ship.” It is learning where the same word means different things depending on which folder you are in.</p>
<h2 id="first-impressions">First Impressions</h2>
<p>Cursor felt less like an autocomplete box and more like a second pair of eyes that could keep several files in view at once. The part that mattered for this post is not the suggestions themselves. It is that I could ask in plain language instead of chaining <code>grep</code>, IDE search, and hallway questions, and still know I had to verify everything myself. The rest of this article is about that balance.</p>
<h2 id="my-first-task-an-event-in-an-event-driven-flow">My First Task: An Event in an Event-Driven Flow</h2>
<p>My first assignment was to implement a new <strong>EDA</strong> (event-driven architecture) event: a message the system emits when something important happens, with consumers in other services or layers.</p>
<p>The questions piled up fast:</p>
<ul>
<li>Where do handlers for similar events live?</li>
<li>How does the legacy stack publish or consume events compared to the newer REST API?</li>
<li>What breaks if the payload shape is wrong?</li>
</ul>
<p>Here is where I stopped treating all search tools as interchangeable.</p>
<p><strong><code>grep</code> / ripgrep</strong> is unbeatable when you already know the string: a class name, a queue name, a constant. You get a list of hits, you open files, you stitch the story together yourself. It is honest work. It breaks down when the codebase uses five phrases for the same idea, or when the important word is buried in a string builder three calls deep.</p>
<p><strong>IDE search</strong> (scoped to a folder, or “find usages” on a symbol) widens the net. It is still fundamentally <em>text and symbols</em>. You can filter by path and file type, which helps in a monorepo, but you are still guessing which symbol to anchor on. If you pick the wrong entry point, you spend an afternoon in the wrong neighborhood.</p>
<p><strong>Cursor with a narrow mission</strong> was different in kind, not only in degree. I could ask for “where events like this are published <em>and</em> consumed,” across legacy and newer layers, and get a <em>proposed</em> map: files grouped by role, sometimes synonyms I would not have thought to search. That map was often wrong in the details. It was still a faster wrong than a slow blind search because I could correct it with the debugger and a close read, instead of discovering I had been searching the wrong word at hour three.</p>
<p>None of that replaces <code>grep</code>. I still use it every day. The point is to match the tool to the uncertainty: exact string → ripgrep; symbol you trust → IDE; fuzzy concept spanning stacks → assistant first, then verify.</p>
<p>A pattern I was looking for in existing handlers looked conceptually like this (dummy names and schema; only the shape matters):</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">&#34;event&#34;</span>: <span style="color:#e6db74">&#34;domain.course.created&#34;</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">&#34;version&#34;</span>: <span style="color:#ae81ff">1</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">&#34;occurred_at&#34;</span>: <span style="color:#e6db74">&#34;2026-03-30T12:00:00Z&#34;</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">&#34;tenant_id&#34;</span>: <span style="color:#e6db74">&#34;acme&#34;</span>,
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">&#34;payload&#34;</span>: {
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">&#34;course_id&#34;</span>: <span style="color:#e6db74">&#34;crs_01jqxyz&#34;</span>,
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">&#34;actor_user_id&#34;</span>: <span style="color:#e6db74">&#34;usr_01abc&#34;</span>
</span></span><span style="display:flex;"><span>  }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-php" data-lang="php"><span style="display:flex;"><span><span style="color:#f92672">&lt;?</span><span style="color:#a6e22e">php</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">// Illustrative only, not production code
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">final</span> <span style="color:#66d9ef">class</span> <span style="color:#a6e22e">CourseCreatedPublisher</span>
</span></span><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">public</span> <span style="color:#66d9ef">function</span> <span style="color:#a6e22e">__construct</span>(<span style="color:#66d9ef">private</span> <span style="color:#a6e22e">EnvelopeFactory</span> $envelopeFactory) {}
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">public</span> <span style="color:#66d9ef">function</span> <span style="color:#a6e22e">publish</span>(<span style="color:#a6e22e">CourseCreated</span> $event, <span style="color:#a6e22e">EventBus</span> $bus)<span style="color:#f92672">:</span> <span style="color:#a6e22e">void</span>
</span></span><span style="display:flex;"><span>    {
</span></span><span style="display:flex;"><span>        $envelope <span style="color:#f92672">=</span> $this<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">envelopeFactory</span><span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">fromDomainEvent</span>($event);
</span></span><span style="display:flex;"><span>        $bus<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">dispatch</span>(<span style="color:#e6db74">&#39;domain.course.created&#39;</span>, $envelope);
</span></span><span style="display:flex;"><span>    }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p>Technically, what I cared about next was not “emit an event” in the happy path. It was <strong>contract compatibility</strong>: <code>version</code> in the envelope, whether delivery is <strong>at-least-once</strong> and consumers therefore need <strong>idempotency keys</strong>, and what happens when one subscriber upgrades before another. I will say it plainly: <strong>event schemas are public APIs</strong>. Treat them as an afterthought and you will ship a breaking change that only shows up under load or in a downstream queue. Cursor helped me find where similar envelopes were assembled and validated; it did not write the idempotency strategy for me.</p>
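<p>To make that concrete, here is the consumer-side guard I went hunting for. This is a minimal sketch with invented names (<code>IdempotencyStore</code>, <code>Envelope</code>), not code from our repo: the envelope id doubles as the idempotency key, and unknown contract versions get parked instead of guessed at.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-php" data-lang="php">&lt;?php
// Illustrative only: an at-least-once consumer made idempotent.
// IdempotencyStore and Envelope are invented names for this post.

final class CourseCreatedConsumer
{
    public function __construct(private IdempotencyStore $store) {}

    public function handle(Envelope $envelope): void
    {
        // markProcessed() returns false when this id was already seen,
        // so a redelivered message becomes a no-op.
        if (!$this-&gt;store-&gt;markProcessed($envelope-&gt;id())) {
            return;
        }

        // Unknown contract versions are parked for review, not guessed at.
        if ($envelope-&gt;version() !== 1) {
            $this-&gt;store-&gt;parkForReview($envelope);
            return;
        }

        // ... side effects happen only after both guards ...
    }
}
</code></pre></div>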
<p>There was no single obvious entry point. I spent half a day in internal docs and file trees, useful but fragmented. That is when I switched from “read everything” to “tell the assistant what success looks like, then tear the answer apart,” with the same verification I would use after any senior’s sketch on a whiteboard.</p>
<h2 id="from-vague-prompts-to-missions">From Vague Prompts to Missions</h2>
<p>I started with a broad prompt:</p>
<blockquote>
<p><em>Hi, what can you tell me about this project?</em></p>
</blockquote>
<p>The answer was a reasonable map: major areas, how pieces relate. Fine for day one. It also showed me a rule I still use: <strong>vague questions get survey answers.</strong> They do not replace a goal.</p>
<p>So I reframed. Instead of “tell me about X,” I gave a mission with boundaries: stacks, folders, and what “done” looks like. The next prompt looked like this:</p>
<blockquote>
<p><em>Find every place user impersonation is implemented or checked. Include legacy PHP modules and the newer REST API code. List files and how they connect.</em></p>
</blockquote>
<p>That shift, from open chat to scoped reconnaissance, is what made the tool feel earned instead of magical.</p>
<h2 id="case-study-1-impersonation-across-legacy-and-rest">Case Study 1: Impersonation Across Legacy and REST</h2>
<p>I kept the mission concrete: both <strong>legacy</strong> and <strong>REST API</strong> folders, and the <strong>interaction points</strong> between them, not a single happy path.</p>
<p>What came back was not a wall of prose. It read more like a <strong>reconnaissance brief</strong>: a short list of areas, then files grouped by role. In my own notes I distilled it into something like the structure below (names and paths are illustrative, not a copy-paste from our repo):</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-text" data-lang="text"><span style="display:flex;"><span>legacy/
</span></span><span style="display:flex;"><span>  └── User/
</span></span><span style="display:flex;"><span>      └── Impersonation*.php          # session / context switch
</span></span><span style="display:flex;"><span>rest-api/
</span></span><span style="display:flex;"><span>  └── src/
</span></span><span style="display:flex;"><span>      └── Identity/
</span></span><span style="display:flex;"><span>          └── ImpersonationGuard.php  # token + permission gate
</span></span></code></pre></div><p>Alongside that, it pointed to middleware or filters that attach identity to the request, the same layers I would have had to discover by stepping through with a debugger.</p>
<p>On the REST side, a simplified version of what I expected to find (and later verified line-by-line) looked like:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-php" data-lang="php"><span style="display:flex;"><span><span style="color:#f92672">&lt;?</span><span style="color:#a6e22e">php</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">// Illustrative middleware, real code has more guards
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">namespace</span> <span style="color:#a6e22e">App\Http\Middleware</span>;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">final</span> <span style="color:#66d9ef">class</span> <span style="color:#a6e22e">AttachImpersonationContext</span>
</span></span><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">public</span> <span style="color:#66d9ef">function</span> <span style="color:#a6e22e">handle</span>(<span style="color:#a6e22e">Request</span> $request, <span style="color:#a6e22e">Closure</span> $next)<span style="color:#f92672">:</span> <span style="color:#a6e22e">Response</span>
</span></span><span style="display:flex;"><span>    {
</span></span><span style="display:flex;"><span>        $token <span style="color:#f92672">=</span> $request<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">attributes</span><span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">get</span>(<span style="color:#e6db74">&#39;session_token&#39;</span>);
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">if</span> ($token<span style="color:#f92672">?-&gt;</span><span style="color:#a6e22e">isImpersonating</span>()) {
</span></span><span style="display:flex;"><span>            $request<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">attributes</span><span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">set</span>(<span style="color:#e6db74">&#39;effective_user_id&#39;</span>, $token<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">subjectId</span>());
</span></span><span style="display:flex;"><span>        }
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> $next($request);
</span></span><span style="display:flex;"><span>    }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p>And a fragment of legacy-style session switching might resemble:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-php" data-lang="php"><span style="display:flex;"><span><span style="color:#f92672">&lt;?</span><span style="color:#a6e22e">php</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">// Illustrative legacy helper
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">function</span> <span style="color:#a6e22e">tlms_begin_impersonation</span>(<span style="color:#a6e22e">int</span> $adminId, <span style="color:#a6e22e">int</span> $targetUserId)<span style="color:#f92672">:</span> <span style="color:#a6e22e">void</span>
</span></span><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span>    $_SESSION[<span style="color:#e6db74">&#39;impersonation&#39;</span>] <span style="color:#f92672">=</span> [
</span></span><span style="display:flex;"><span>        <span style="color:#e6db74">&#39;admin_id&#39;</span> <span style="color:#f92672">=&gt;</span> $adminId,
</span></span><span style="display:flex;"><span>        <span style="color:#e6db74">&#39;target_id&#39;</span> <span style="color:#f92672">=&gt;</span> $targetUserId,
</span></span><span style="display:flex;"><span>        <span style="color:#e6db74">&#39;started_at&#39;</span> <span style="color:#f92672">=&gt;</span> <span style="color:#a6e22e">time</span>(),
</span></span><span style="display:flex;"><span>    ];
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p>Seeing both shapes in the same exploration session made it obvious where parity checks belong.</p>
<h3 id="how-i-verified-it">How I verified it</h3>
<p>Cursor gave me a <strong>hypothesis graph</strong>. I still:</p>
<ol>
<li>Opened each suggested file and read the real control flow.</li>
<li>Set a breakpoint on the REST path and walked the same route in XDebug.</li>
<li>Compared: does legacy enforce the same invariants as the new API, or only some of them?</li>
</ol>
<p>In one case the overview was slightly <strong>over-merged</strong>: two similarly named helpers were described as one flow when they served different entry points. That is not a reason to abandon the tool; it is a reason to treat its map as <strong>R&amp;D output</strong>, not a spec.</p>
<p>Roughly, that first targeted round took on the order of <strong>minutes</strong> to produce a navigable list; doing the same with search keywords alone would have been <strong>hours</strong> of false positives and missed synonyms (“impersonate” vs “act as” vs “switch user”).</p>
<p>From a security angle, impersonation is where I am least willing to trust generated code. I want <strong>explicit invariants</strong>: who initiated the switch, whether it is time-bounded, whether audit logs record both identities, and whether APIs reject confused-deputy patterns. My view is that an assistant is fine for <strong>locating</strong> those invariants across stacks; it is not the authority on whether your threat model is complete. If the map says “middleware X,” I still read X and ask whether that is sufficient for every transport (browser session, API token, background job). That skepticism is not cynicism about the tool; it is how I sleep at night.</p>
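<p>To pin down what “explicit invariants” means in code, here is a sketch with invented names (<code>ImpersonationPolicy</code>, <code>AuditLog</code>, <code>ImpersonationDenied</code>), not our implementation. The shape I want is guard clauses that name each invariant, plus an audit record that captures both identities and a time bound:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-php" data-lang="php">&lt;?php
// Illustrative only: impersonation invariants as guard clauses.
// All names here are invented for this post.

final class ImpersonationPolicy
{
    private const MAX_DURATION_SECONDS = 900;

    public function assertAllowed(User $admin, User $target, AuditLog $audit): void
    {
        // Invariant 1: the initiator must hold the capability.
        if (!$admin-&gt;hasCapability('users.impersonate')) {
            throw new ImpersonationDenied('initiator lacks capability');
        }

        // Invariant 2: no privilege escalation through the side door.
        if ($target-&gt;isMorePrivilegedThan($admin)) {
            throw new ImpersonationDenied('target outranks initiator');
        }

        // Invariant 3: both identities hit the audit trail, time-bounded.
        $audit-&gt;record($admin-&gt;id(), $target-&gt;id(), self::MAX_DURATION_SECONDS);
    }
}
</code></pre></div>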
<h2 id="case-study-2-when-permissions-means-five-different-things">Case Study 2: When “Permissions” Means Five Different Things</h2>
<p>While tracing features, I kept hitting the word <strong>Permissions</strong>. In a smaller codebase that might be one module. Here it could mean:</p>
<table>
  <thead>
      <tr>
          <th>Layer</th>
          <th>What “permission” often refers to</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>HTTP API</td>
          <td>Route or scope checks on specific endpoints</td>
      </tr>
      <tr>
          <td>Domain / service</td>
          <td>Business rules (“may this user perform this action on this resource”)</td>
      </tr>
      <tr>
          <td>RBAC</td>
          <td>Roles and role-to-capability mapping</td>
      </tr>
      <tr>
          <td>Product / feature flags</td>
          <td>Gates that are not strictly authorization</td>
      </tr>
      <tr>
          <td>Infra</td>
          <td>Keys, environment, and deployment concerns, not user authorization</td>
      </tr>
  </tbody>
</table>
<p>After <strong>company onboarding</strong> (product and engineering orientation, not a public certification name), I could name concrete actions (“create user,” “update user,” “view user”), but I still did not know <strong>where</strong> those checks were enforced relative to “create course,” which spans UI, legacy API, and newer flows.</p>
<p>So I asked:</p>
<blockquote>
<p><em>Where is permission enforced so a user cannot create a course? Distinguish UI-only checks from API enforcement, and legacy vs newer paths.</em></p>
</blockquote>
<h3 id="what-made-this-answer-useful">What made this answer useful</h3>
<p>The valuable part was not “here is a file.” It was <strong>layering</strong>: legacy UI affordances, legacy API handlers, and REST handlers, with an explicit call-out where behavior could diverge, for example UI hiding a button while an API still allows the operation if called directly. That is the class of bug you hunt when you care about consistency, not just about compiling.</p>
<p>Dummy examples of what “three layers” can look like in practice:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-typescript" data-lang="typescript"><span style="display:flex;"><span><span style="color:#75715e">// Client may hide UI without enforcing server-side (illustrative)
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">const</span> <span style="color:#a6e22e">canCreateCourse</span> <span style="color:#f92672">=</span> <span style="color:#a6e22e">usePermission</span>(<span style="color:#e6db74">&#39;courses.create&#39;</span>);
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">return</span> <span style="color:#a6e22e">canCreateCourse</span> <span style="color:#f92672">?</span> &lt;<span style="color:#f92672">CreateCourseButton</span> /&gt; <span style="color:#f92672">:</span> <span style="color:#66d9ef">null</span>;
</span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-php" data-lang="php"><span style="display:flex;"><span><span style="color:#f92672">&lt;?</span><span style="color:#a6e22e">php</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">// Legacy API handler illustrative
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">public</span> <span style="color:#66d9ef">function</span> <span style="color:#a6e22e">postCreateCourse</span>(<span style="color:#a6e22e">CreateCourseRequest</span> $req)<span style="color:#f92672">:</span> <span style="color:#a6e22e">JsonResponse</span>
</span></span><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">if</span> (<span style="color:#f92672">!</span>$this<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">acl</span><span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">userMay</span>($req<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">user</span>(), <span style="color:#e6db74">&#39;courses.create&#39;</span>)) {
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> <span style="color:#a6e22e">response</span>()<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">json</span>([<span style="color:#e6db74">&#39;error&#39;</span> <span style="color:#f92672">=&gt;</span> <span style="color:#e6db74">&#39;forbidden&#39;</span>], <span style="color:#ae81ff">403</span>);
</span></span><span style="display:flex;"><span>    }
</span></span><span style="display:flex;"><span>    <span style="color:#75715e">// ...
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>}
</span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-php" data-lang="php"><span style="display:flex;"><span><span style="color:#f92672">&lt;?</span><span style="color:#a6e22e">php</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">// REST policy illustrative
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">final</span> <span style="color:#66d9ef">class</span> <span style="color:#a6e22e">CreateCoursePolicy</span>
</span></span><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">public</span> <span style="color:#66d9ef">function</span> <span style="color:#a6e22e">create</span>(<span style="color:#a6e22e">User</span> $actor, <span style="color:#a6e22e">Tenant</span> $tenant)<span style="color:#f92672">:</span> <span style="color:#a6e22e">bool</span>
</span></span><span style="display:flex;"><span>    {
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> $this<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">capabilities</span><span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">granted</span>($actor, $tenant, <span style="color:#e6db74">&#39;courses.create&#39;</span>);
</span></span><span style="display:flex;"><span>    }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p>The point of the exercise was not to memorize snippets like these. It was to know <strong>which of them actually runs</strong> for the client I cared about.</p>
<p>I am opinionated about <strong>defense in depth</strong>: the UI should reflect policy, but the server must enforce it. If those two disagree, I consider it a defect unless there is a documented, intentional reason (for example, progressive enhancement with a degraded mode, which still needs a story for direct API access). In a brownfield LMS, “permission” often leaks into feature flags and product experiments too. I do not think those should be conflated with RBAC in code, even when marketing uses one word for all of them. Naming and module boundaries matter because the next engineer will grep for <code>permission</code> and land in the wrong layer.</p>
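<p>A sketch of the separation I mean, reusing the dummy <code>acl</code> helper from the earlier handler and an invented <code>$featureFlags</code> collaborator. Two different questions, asked through two different names, even when product copy calls both “permissions”:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-php" data-lang="php">&lt;?php
// Illustrative fragment: a product gate and an authorization check
// kept as separately named questions.

// Product decision: which experience does this tenant get?
if (!$this-&gt;featureFlags-&gt;enabled('course_builder_v2', $tenant)) {
    return $this-&gt;legacyBuilder($request);
}

// Authorization: may this user perform this action at all?
if (!$this-&gt;acl-&gt;userMay($request-&gt;user(), 'courses.create')) {
    return response()-&gt;json(['error' =&gt; 'forbidden'], 403);
}
</code></pre></div>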
<h2 id="trade-offs-cursor-vs-other-tools">Trade-Offs: Cursor vs Other Tools</h2>
<p>None of these replace the others; they have different failure modes:</p>
<table>
  <thead>
      <tr>
          <th>Approach</th>
          <th>Strength</th>
          <th>Weakness</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td><code>grep</code> / ripgrep</td>
          <td>Exact symbol search, fast</td>
          <td>Synonyms and indirect calls; no narrative</td>
      </tr>
      <tr>
          <td>IDE “Find usages”</td>
          <td>Scoped, symbol-aware refinement</td>
          <td>Noise in huge codebases; misses dynamic dispatch</td>
      </tr>
      <tr>
          <td>Debugger</td>
          <td>Ground truth for one execution</td>
          <td>Slow to cover all branches</td>
      </tr>
      <tr>
          <td>Cursor (directed prompts)</td>
          <td>Cross-file story and synonyms</td>
          <td>Can over-merge or hallucinate edge paths</td>
      </tr>
  </tbody>
</table>
<p>The workflow that worked for me: <strong>Cursor for a structured first pass, then the debugger and raw reading for proof.</strong> <a href="https://docs.cursor.com/" target="_blank" rel="noopener noreferrer">Cursor’s own documentation</a> stresses context and rules; pairing that with repo-specific rules files (when your team maintains them) improves consistency.</p>
<h3 id="opinions-i-am-willing-to-defend">Opinions I am willing to defend</h3>
<ul>
<li><strong>Navigation beats codegen for onboarding.</strong> The highest leverage use of an AI editor in a large repo, for me, has been <em>finding and relating</em> code, not letting it draft whole features on day three. I would rather own fewer lines I understand than ship many I do not.</li>
<li><strong>Context windows are a budget, not a miracle.</strong> Long chats drift. I restart threads when the task changes, and I pin concrete paths or symbols when I know them. Treating the assistant like a stateless search plus narrative layer keeps quality higher than pretending it remembers last week’s decision.</li>
<li><strong>When <code>grep</code> wins:</strong> exact symbol renames, generated migrations, or a single known string across the repo. When Cursor wins: “this concept has five names and three frameworks.”</li>
<li><strong>Telemetry still beats prose.</strong> If logs or traces show which component handled a request, that evidence outranks a confident paragraph from any model. I use AI to suggest <em>where</em> to add a log or breakpoint, not to replace runtime truth.</li>
</ul>
<h3 id="one-technical-detail-request-identity">One technical detail: request identity</h3>
<p>On the REST side, identity often flows through attributes populated by middleware (see the dummy <code>AttachImpersonationContext</code> earlier). That matters because <strong>authorization policies</strong> usually read the same effective user the domain services see. If those two disagree, you get bugs that look like “permissions are random.” When I explore, I explicitly ask how <code>Request</code> attributes, session state, and policy classes align. A boring question, but it prevents spectacular production incidents.</p>
<p>When I want a reproducible check after an exploratory chat, I sometimes leave a scratch assertion in a test or script (nothing that ships), just a guardrail for my own understanding:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-php" data-lang="php"><span style="display:flex;"><span><span style="color:#f92672">&lt;?</span><span style="color:#a6e22e">php</span>
</span></span><span style="display:flex;"><span><span style="color:#75715e">// Spike / scratch: throw away after you trust the real integration
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>
</span></span><span style="display:flex;"><span><span style="color:#66d9ef">public</span> <span style="color:#66d9ef">function</span> <span style="color:#a6e22e">test_create_course_requires_capability</span>()<span style="color:#f92672">:</span> <span style="color:#a6e22e">void</span>
</span></span><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span>    $this<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">actingAsUserWithout</span>(<span style="color:#e6db74">&#39;courses.create&#39;</span>);
</span></span><span style="display:flex;"><span>    $response <span style="color:#f92672">=</span> $this<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">postJson</span>(<span style="color:#e6db74">&#39;/api/v2/courses&#39;</span>, [<span style="color:#e6db74">&#39;title&#39;</span> <span style="color:#f92672">=&gt;</span> <span style="color:#e6db74">&#39;T&#39;</span>]);
</span></span><span style="display:flex;"><span>    $response<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">assertStatus</span>(<span style="color:#ae81ff">403</span>);
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><h2 id="a-small-playbook-you-can-reuse-tomorrow">A Small Playbook You Can Reuse Tomorrow</h2>
<p>If you take one thing from this post, make it operational:</p>
<ol>
<li><strong>Name the subsystem</strong> (e.g. “impersonation,” “course creation permissions”).</li>
<li><strong>Bound the search</strong> (legacy vs new, UI vs API).</li>
<li><strong>Ask for layers and divergence points</strong>, not just file paths.</li>
<li><strong>Verify</strong> with reads and, when it matters, a debugger.</li>
<li><strong>Log wrong merges</strong> when the model conflates two flows. Those notes train your next prompt.</li>
</ol>
<h2 id="closing">Closing</h2>
<p>The hard part of a large system is rarely typing the implementation. It is knowing which layer owns a rule, whether two stacks agree, and what breaks downstream. Cursor did not hand me that understanding. It narrowed where to look and sharpened the questions I asked in code review and in my own head.</p>
<p>I use it as a <strong>reconnaissance</strong> tool: point, verify, then own the change. That shortened the distance between “new on the team” and “comfortable changing this.” That is the bar I care about for the next task too.</p>
<p>If one belief ties this post together: <strong>in a brownfield codebase, competence shows up as impact awareness, not as commit velocity.</strong> Tools that pull impact into view earlier are worth learning. The rest is packaging.</p>
]]></content:encoded></item><item><title>Upgrading from React 18 to React 19: What We Learned and How Claude Code Saved What Was Left of My Sanity</title><link>https://blog.talentlms.io/posts/upgrading-from-react-18-to-react-19/</link><pubDate>Mon, 30 Mar 2026 00:00:00 +0000</pubDate><dc:creator>Alexander Antoniades</dc:creator><guid>https://blog.talentlms.io/posts/upgrading-from-react-18-to-react-19/</guid><description>You bump a version, run install, and the project catches fire. That's a React upgrade.
Half the bugs you find were already there — the rest is types, tests, and Claude Code keeping the investigation from eating you alive.</description><enclosure url="https://blog.talentlms.io/images/posts/upgrading-from-react-18-to-react-19.jpg" type="image/jpeg"/><media:content url="https://blog.talentlms.io/images/posts/upgrading-from-react-18-to-react-19.jpg" medium="image"><media:title type="plain">Laptop screen displaying the React logo in a dim workspace, suggesting a focused frontend upgrade.</media:title><media:description type="plain">Laptop screen displaying the React logo in a dim workspace, suggesting a focused frontend upgrade.</media:description></media:content><content:encoded><![CDATA[<img src="https://blog.talentlms.io/images/posts/upgrading-from-react-18-to-react-19.jpg" alt="Laptop screen displaying the React logo in a dim workspace, suggesting a focused frontend upgrade." style="max-width: 100%; height: auto; margin-bottom: 1.5em;" /><p style="margin-bottom: 1.5em; padding: 1em; background-color: #f5f5f5; border-left: 4px solid #0066cc;"><strong>Alexander Antoniades</strong>, Frontend Chapter Lead<br/>Alexander Antoniades is the Frontend Chapter Lead at TalentLMS. He spends his days obsessing over web performance and front-end architecture — the kind of work where shaving off 50 milliseconds feels …</p><p>I want you to picture something. It&rsquo;s a Tuesday. You&rsquo;re at your desk. Coffee&rsquo;s still warm. You open your terminal, bump a version number, hit install, and watch your entire project catch fire.</p>
<p>That&rsquo;s a React upgrade.</p>
<p>It always <em>sounds</em> straightforward — a quick version bump, <code>npm install</code>, maybe mute a few warnings. But if you&rsquo;ve actually done it, you know the truth. The version number is the easy part. Everything that comes after is the part nobody warns you about.</p>
<p>A few months back, our Frontend chapter pulled off a full upgrade from React 17 to React 18 during a hackathon at our headquarters in Athens. The whole team, working toward a common goal. It was challenging, messy, and — I&rsquo;ll admit — actually fun.</p>
<p>React 18 was working fine after that. Life was good. But the ecosystem, it never sits still, does it? Libraries started hinting that React 19 was the &ldquo;expected&rdquo; baseline. TypeScript types were drifting in that direction. Deprecation warnings started showing up like passive-aggressive Post-it notes on your monitor.</p>
<p>Nobody wants to be the team still running the old version when the next wave hits. That&rsquo;s how you end up three versions behind, staring down a rewrite.</p>
<p>So this time, I decided to do it solo. Just me, a codebase, and what I assumed would be a quiet afternoon.</p>
<p>I was wrong.</p>
<hr>
<h2 id="the-challenges">The Challenges</h2>
<h3 id="dependency-conflicts">Dependency Conflicts</h3>
<p>Here&rsquo;s the thing about upgrading React. You&rsquo;re not upgrading React. You&rsquo;re upgrading <em>everything that ever touched React</em>.</p>
<p>The moment I bumped to React 19, the whole project started screaming. React Router needed a version bump. Nearly all unit tests failed. Testing libraries demanded newer React DOM APIs. ESLint plugins lost their minds. I thought this would be easier than the 17-to-18 jump. Instead, I was playing whack-a-mole with a cascade of small fires, each one revealing two more underneath.</p>
<p>I thought about quitting. Just reverting the branch, closing the laptop, going for a walk. Pretending none of it happened.</p>
<p>I didn&rsquo;t. But I thought about it.</p>
<h3 id="the-bugs-that-were-always-there">The Bugs That Were Always There</h3>
<p>Here&rsquo;s a dirty secret about React upgrades: half the bugs you &ldquo;discover&rdquo; aren&rsquo;t new. They were always there. You just never had a reason to look.</p>
<p>Take this example. On React 18, this effect worked without complaints — <code>reset</code> was replacing the entire form state with just <code>{ event: { type } }</code>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-ts" data-lang="ts"><span style="display:flex;"><span><span style="color:#a6e22e">useEffect</span>(() <span style="color:#f92672">=&gt;</span> {
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">const</span> <span style="color:#a6e22e">preservedEventType</span> <span style="color:#f92672">=</span> <span style="color:#a6e22e">formValuesWatch</span>.<span style="color:#a6e22e">event</span><span style="color:#f92672">?</span>.<span style="color:#66d9ef">type</span>;
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">reset</span>({ <span style="color:#a6e22e">event</span><span style="color:#f92672">:</span> { <span style="color:#66d9ef">type</span><span style="color:#f92672">:</span> <span style="color:#a6e22e">preservedEventType</span> } });
</span></span><span style="display:flex;"><span>}, [<span style="color:#a6e22e">eventTypeWatch</span>]);
</span></span></code></pre></div><p>It worked. It shipped. Nobody questioned it. But it was broken the whole time. Every time this effect ran, it was throwing away the rest of your form values and replacing everything with a minimal object. React 18&rsquo;s StrictMode was already double-firing effects on mount in development — which meant this bug was already being triggered twice before anyone even touched the form. But TypeScript didn&rsquo;t complain about the shape mismatch, and the form <em>seemed</em> to work, so nobody noticed. Or maybe we just looked the other way.</p>
<p>Then we upgraded to React 19 and updated <code>@types/react</code>. The stricter types lit up like a Christmas tree. Suddenly TypeScript cared about what we were passing to <code>reset()</code>. And once we started looking at the flagged code, the real bug became obvious — this effect had been silently corrupting form state all along. The upgrade didn&rsquo;t break it. It just ripped off the bandage.</p>
<p>The fix was to stop being lazy about it:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-ts" data-lang="ts"><span style="display:flex;"><span><span style="color:#a6e22e">useEffect</span>(() <span style="color:#f92672">=&gt;</span> {
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">const</span> <span style="color:#a6e22e">preservedEventType</span> <span style="color:#f92672">=</span> <span style="color:#a6e22e">formValuesWatch</span>.<span style="color:#a6e22e">event</span><span style="color:#f92672">?</span>.<span style="color:#66d9ef">type</span>;
</span></span><span style="display:flex;"><span>  <span style="color:#66d9ef">const</span> <span style="color:#a6e22e">currentValues</span> <span style="color:#f92672">=</span> <span style="color:#a6e22e">getValues</span>();
</span></span><span style="display:flex;"><span>  <span style="color:#a6e22e">reset</span>({
</span></span><span style="display:flex;"><span>    ...<span style="color:#a6e22e">currentValues</span>,
</span></span><span style="display:flex;"><span>    <span style="color:#a6e22e">event</span><span style="color:#f92672">:</span> { ...<span style="color:#a6e22e">currentValues</span>.<span style="color:#a6e22e">event</span>, <span style="color:#66d9ef">type</span><span style="color:#f92672">:</span> <span style="color:#a6e22e">preservedEventType</span> },
</span></span><span style="display:flex;"><span>  });
</span></span><span style="display:flex;"><span>}, [<span style="color:#a6e22e">eventTypeWatch</span>]);
</span></span></code></pre></div><p>Grab the current values. Spread them. Override only what you need. Don&rsquo;t blow away things you didn&rsquo;t mean to touch. Simple, once you see it. But you have to see it first — and sometimes it takes an upgrade to force you to look.</p>
<h3 id="typescript-got-stricter-as-well">TypeScript Got Stricter As Well</h3>
<p>This is the part where it gets fun. And by fun, I mean the kind of fun where you open your IDE and every single file has red squiggles. Everywhere. Like your codebase developed a rash overnight.</p>
<p>React 19 shipped with completely overhauled TypeScript types, and the changes were not cosmetic. <code>ReactChild</code>, <code>ReactText</code>, a handful of other utility types you probably used without thinking — gone. Just removed. If your codebase referenced them, even indirectly through a shared component library, you got a wall of errors the moment you updated <code>@types/react</code>.</p>
<p>Then there was the <code>ref</code> situation. React 19 finally lets you pass <code>ref</code> as a regular prop. No more <code>forwardRef</code> wrapping. That&rsquo;s the good news. The bad news is that on the TypeScript side, ref callback return types got stricter. If you had ref callbacks with implicit returns — an arrow function that accidentally returned something — TypeScript now rejected them. Across a large codebase, these &ldquo;small&rdquo; changes multiplied into hundreds of type errors.</p>
<p><code>useRef</code> got pickier too. Call it without an argument? TypeScript error. You need to explicitly pass <code>null</code> or <code>undefined</code> now. <code>RefObject</code> no longer includes <code>null</code> by default, so any code that assumed <code>ref.current</code> could be <code>null</code> needed touching.</p>
<p>The React team published a codemod (<code>types-react-codemod</code>) that handled some of this. The keyword being <em>some</em>. The more custom your type patterns, the more you were on your own.</p>
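<p>Two dummy examples of the shape of those fixes (all names invented, none of this is our code). The first spells out the union that a removed alias used to cover; the second gives <code>useRef</code> the explicit initial value the new types demand:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-tsx" data-lang="tsx">import { useRef, type ReactElement } from 'react';

// ReactChild is gone from @types/react; spell out the union it aliased.
type CellContent = ReactElement | string | number;

function Meter() {
  // React 19's types require an explicit argument; a bare useRef() errors.
  const barRef = useRef&lt;HTMLDivElement&gt;(null);
  return &lt;div ref={barRef} /&gt;;
}
</code></pre></div>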
<h3 id="the-forwardref-farewell">The <code>forwardRef</code> Farewell</h3>
<p>This one deserves its own section because it touched <em>everything</em>.</p>
<p>With React 19 treating <code>ref</code> as a regular prop, <code>forwardRef</code> becomes unnecessary. In theory, that&rsquo;s a beautiful simplification. In practice, we had dozens of components wrapped in <code>forwardRef</code>, each with its own generic type signatures, its own quirks, its own reasons for existing.</p>
<p>Removing <code>forwardRef</code> meant restructuring every single one. Pull <code>ref</code> into the props interface. Update the type signatures. Make sure the parent components that pass refs still compile. It was mechanical, repetitive, mind-numbing work — the kind where you get sloppy on component number 47 because your eyes have gone glassy and everything looks the same.</p>
<p>The kind of work that breaks you if you let it.</p>
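<p>For the record, the per-component shape of that surgery, shown on a dummy text input rather than one of ours:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-tsx" data-lang="tsx">import { forwardRef, type Ref } from 'react';

type InputProps = { label: string };

// Before: React 18 needs the wrapper, plus its generic parameters.
const OldTextInput = forwardRef&lt;HTMLInputElement, InputProps&gt;(
  ({ label }, ref) =&gt; &lt;input ref={ref} aria-label={label} /&gt;,
);

// After: React 19 accepts ref as an ordinary prop.
function TextInput({ label, ref }: InputProps &amp; { ref?: Ref&lt;HTMLInputElement&gt; }) {
  return &lt;input ref={ref} aria-label={label} /&gt;;
}
</code></pre></div>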
<hr>
<h2 id="enter-claude-code">Enter Claude Code</h2>
<p>A few hours in. Buried in type errors. Tests failing everywhere. That quiet voice in your head saying <em>this solo upgrade was a terrible idea</em>.</p>
<p>That&rsquo;s when I decided to throw Claude Code at it.</p>
<p>I&rsquo;d used it before for smaller things — generating boilerplate, explaining someone else&rsquo;s code, the usual. But I&rsquo;d never thrown it at a full-scale migration. A real mess. The kind of mess where you don&rsquo;t even know where to start.</p>
<p>Turns out, that&rsquo;s exactly where it comes alive.</p>
<h3 id="bulk-code-migrations">Bulk Code Migrations</h3>
<p>The <code>forwardRef</code> removal was the first real test. Dozens of components. All needing the same structural surgery: unwrap <code>forwardRef</code>, move <code>ref</code> into the props type, clean up the generics. By hand, this is the kind of task that takes hours and introduces bugs at a steady, reliable rate.</p>
<p>With Claude Code, I described the pattern once. It understood the structure. And then it just&hellip; did it. Across the codebase.</p>
<p>What got me was how it adapted. It didn&rsquo;t do a dumb find-and-replace. Components with complex generic types got different treatment than simple ones. It caught cases where <code>ref</code> was being used in non-standard ways and flagged them for manual review instead of blindly transforming them. It was thinking about the code, not just processing it.</p>
<p>Same approach worked for the deprecated type references. Every instance of <code>ReactChild</code> that needed to become <code>React.ReactElement | number | string</code>? Handled. File by file. Respecting existing imports. Not introducing noise.</p>
<h3 id="debugging-type-errors">Debugging Type Errors</h3>
<p>This is where Claude Code really earned its place. After the bulk migrations, I still had a backlog of type errors that weren&rsquo;t mechanical. Nuanced things — <code>useReducer</code> inference changes, ref callback return types, third-party library type mismatches.</p>
<p>Normally, each of these is a 20-minute rabbit hole. Google the error. Find a GitHub issue from eight months ago. Read through 40 comments to find the one that actually has the answer. Try three different approaches. Repeat.</p>
<p>Instead, I could feed Claude Code the error and the surrounding code and get back a clear explanation of <em>why</em> it broke and <em>how</em> to fix it. Not just &ldquo;change this line.&rdquo; It understood the difference between &ldquo;this broke because React 19 changed the type definition&rdquo; and &ldquo;this was always wrong, React 18 just let you get away with it.&rdquo;</p>
<p>One that sticks out: we had a custom hook that returned a ref callback. After the upgrade, TypeScript hated the return type. Claude Code immediately identified the issue — React 19 introduced ref cleanup functions, so you can now return a cleanup function from a ref callback, similar to <code>useEffect</code>. The implicit return in our arrow function was being interpreted as a cleanup function. The fix was a one-liner. Add explicit curly braces. Done.</p>
<p>Understanding <em>why</em> that was the fix? That would have cost me half an hour on my own. Maybe more.</p>
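<p>A reconstruction of that one-liner on dummy names (<code>Cell</code>, <code>cells</code>), since the real hook is internal:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-tsx" data-lang="tsx">function Cell({ id, cells }: { id: string; cells: Map&lt;string, HTMLTableCellElement | null&gt; }) {
  // Before: ref={(node) =&gt; cells.set(id, node)}
  // Map.set returns the Map, and React 19 reads a ref callback's return
  // value as a cleanup function, so the implicit return fails to type-check.
  // After: a block body returns undefined, which is always accepted.
  return &lt;td ref={(node) =&gt; { cells.set(id, node); }} /&gt;;
}
</code></pre></div>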
<h3 id="fixing-broken-tests">Fixing Broken Tests</h3>
<p>Nearly all our unit tests were failing. This was probably the most daunting part, because test failures are a special kind of demoralizing — you can&rsquo;t tell at a glance whether it&rsquo;s a real problem or just noise from the upgrade.</p>
<p>The failures fell into categories. Testing library API changes. StrictMode behavior affecting test output. Genuine regressions from our refactoring. The tricky part is figuring out <em>which is which</em>.</p>
<p>Claude Code helped me triage at speed. Feed it a failing test with its error output. It would tell me: this is a testing library compatibility issue (update <code>@testing-library/react</code>, adjust the assertions), this is a StrictMode double-render thing (your test was relying on render count), this is an actual bug you introduced (fix it).</p>
<p>For the testing library issues, it helped me batch-update patterns across the entire test suite. Replacing deprecated query methods. Updating async utilities. Adjusting mock setups that assumed React 18 rendering behavior. The boring, necessary work that eats your day if you do it by hand.</p>
<hr>
<h2 id="what-i-learned">What I Learned</h2>
<p>A few things I&rsquo;m taking away from this:</p>
<p><strong>Upgrade to 18.3 first.</strong> If you haven&rsquo;t already, install <code>react@18.3</code> before you even think about 19. It&rsquo;s identical to 18.2 but adds deprecation warnings for everything that will break. Think of it as a damage report before the actual damage.</p>
<p><strong>Don&rsquo;t underestimate the type changes.</strong> The runtime breaking changes in React 19? Manageable. The TypeScript type changes? That&rsquo;s where the real volume lives, especially in a large codebase. Run the official <code>types-react-codemod</code> early — it won&rsquo;t catch everything, but it&rsquo;ll clear the surface-level wreckage. And if you&rsquo;re on a large project and can&rsquo;t afford to fix every type error in one go, look into <a href="https://phenomnomnominal.github.io/betterer/" target="_blank" rel="noopener noreferrer">Betterer</a>. It lets you snapshot your current type errors and enforce that things only get <em>better</em> over time — no new errors allowed, but existing ones can be chipped away incrementally across PRs. Merge the upgrade, unblock your team, and clean up at your own pace. No shame in that.</p>
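<p>If Betterer sounds abstract, the setup is roughly a <code>.betterer.ts</code> at the repo root. This sketch follows the shape of Betterer&rsquo;s documented TypeScript test; treat it as a starting point and check it against the current docs, since the API may have moved:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-ts" data-lang="ts">// .betterer.ts: snapshot today's type errors, then only let the count shrink.
import { typescript } from '@betterer/typescript';

export default {
  'stricter react 19 types': () =&gt;
    typescript('./tsconfig.json', {
      strict: true,
    }).include('./src/**/*.ts', './src/**/*.tsx'),
};
</code></pre></div>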
<p><strong>The upgrade didn&rsquo;t break your code. It exposed what was already broken.</strong> Half the bugs we found during the migration existed in React 18 too — we just never had a reason to look. Stricter types forced us to revisit code we hadn&rsquo;t touched in months, and that&rsquo;s where the real issues were hiding. Don&rsquo;t blame the upgrade. Thank it.</p>
<p><strong>AI tooling has crossed a line for migrations.</strong> I&rsquo;ll be honest — I was skeptical. Migrations aren&rsquo;t about writing new code. They&rsquo;re about understanding why existing code breaks under new rules and applying the right fix without creating new problems. That felt too nuanced for an AI tool. I was wrong. Claude Code didn&rsquo;t replace my judgment, but it cut the mechanical work and research time dramatically. The ratio shifted from &ldquo;90% investigation, 10% fixing&rdquo; to something much more sane.</p>
<hr>
<h2 id="final-thoughts">Final Thoughts</h2>
<p>Would I do the solo upgrade again? Yes. But only because Claude Code made it possible. Without it, this would have been days of documentation, GitHub issues, and Stack Overflow threads. The kind of work that makes you question your career choices.</p>
<p>Instead, it was one intense session where I could focus on the decisions that actually mattered and hand off the repetitive, soul-crushing investigation work to something that didn&rsquo;t mind doing it.</p>
<p>The React ecosystem keeps moving. These upgrades aren&rsquo;t optional forever. You put them off, they compound. The gap gets wider. The next upgrade gets worse.</p>
<p>The good news? The tooling for managing migrations — from the React team&rsquo;s codemods and incremental releases, to AI assistants that actually understand what they&rsquo;re looking at — is getting better, fast.</p>
<p>If you&rsquo;ve been sitting on the React 19 upgrade, my advice is simple: stop waiting. It&rsquo;s not going to get easier on its own. Grab the best tools you have and start pulling the thread.</p>
<p>Your future self will thank you. Or at least stop resenting you.</p>
]]></content:encoded></item><item><title>What Is Architecture?</title><link>https://blog.talentlms.io/posts/what-is-architecture/</link><pubDate>Mon, 02 Mar 2026 00:00:00 +0000</pubDate><dc:creator>Yannis Rizos</dc:creator><guid>https://blog.talentlms.io/posts/what-is-architecture/</guid><description>A young engineer asked me a deceptively simple question. Years in the problem, and the answer came out as a fumble.
Every short definition slips. Architecture is not the diagrams, not a phase, not one person's role. It is the practice of deciding which tensions to accept. Organizations and systems co-evolve. The architect's value lies in building that capacity in teams, not owning every decision.
This is the answer that took almost four months to write: why the question resists a tidy definition, and why that resistance is the most honest signal we have about where architecture actually lives.</description><enclosure url="https://blog.talentlms.io/images/posts/what-is-architecture.png" type="image/png"/><media:content url="https://blog.talentlms.io/images/posts/what-is-architecture.png" medium="image"><media:title type="plain">A fork in the road: one path splitting into six branches, representing optionality and the capacity to choose direction.</media:title><media:description type="plain">A fork in the road: one path splitting into six branches, representing optionality and the capacity to choose direction.</media:description></media:content><content:encoded><![CDATA[<img src="https://blog.talentlms.io/images/posts/what-is-architecture.png" alt="A fork in the road: one path splitting into six branches, representing optionality and the capacity to choose direction." style="max-width: 100%; height: auto; margin-bottom: 1.5em;" /><p style="margin-bottom: 1.5em; padding: 1em; background-color: #f5f5f5; border-left: 4px solid #0066cc;"><strong>Yannis Rizos</strong>, Chief Software Architect<br/>Yannis discovered programming at age 7. Soon after, he encountered Larry Wall's three virtues of laziness, impatience, and hubris, principles that have guided his approach to software development ever …</p><p>The deceptively simple question came at <a href="https://open-conf.gr/" target="_blank" rel="noopener noreferrer">Open Conf</a> last November, when a curious young engineer approached me at the conference&rsquo;s mentorship corner. We were going to get a fresh cup of coffee when she asked: <strong>What is architecture?</strong></p>
<p>It should have been easy terrain for someone who has spent years inside the problem, but what came out was a few sentences that gestured at decisions and trade-offs without ever landing anywhere. She nodded in the way people nod when they are being polite to someone who has just disappointed them. Almost four months later, I am still turning the question over, which is either embarrassing or instructive.</p>
<p>I have decided to treat it as the latter and write up what I owe her.</p>
<h2 id="the-short-answer-problem">The Short Answer Problem</h2>
<p>The resistance to a clean answer shows up immediately when you go looking for a definition. The closest thing to a satisfying formulation belongs to Ralph Johnson, one of the four authors of <em>Design Patterns</em>, the book that shaped how the industry thinks about software structure. After all that engagement with the problem, Johnson arrived at something that sounds almost flippant:</p>
<blockquote>
<p>Architecture is about the important stuff, whatever that is.</p>
</blockquote>
<p>Martin Fowler, who often returns to this line, treats the deliberate vagueness as a feature rather than a bug. The longer I&rsquo;ve spent on architectural problems, the more I think he is right.</p>
<p>Whenever I&rsquo;ve tried to give a more precise version of that formulation, something slips. &ldquo;Architecture is about structure&rdquo; works until you notice that a pile of undifferentiated code has structure too, and most of it isn&rsquo;t architectural. &ldquo;Architecture is about decisions&rdquo; works until you ask which ones count, and where you draw that line depends entirely on context. &ldquo;Architecture is the decisions that are hard to reverse&rdquo; is closer, but reversibility is partly a function of how the rest of the system is built, which puts you in a loop before you&rsquo;ve arrived anywhere.</p>
<p>I&rsquo;m not alone in this struggle. The Software Engineering Institute maintains a compiled <a href="https://www.sei.cmu.edu/library/what-is-your-definition-of-software-architecture/" target="_blank" rel="noopener noreferrer">catalogue of definitions</a> across modern, classic, and bibliographic sources, and the fact that such a catalogue exists and keeps growing is itself data. That is not a gap in the literature waiting to be filled. It is signal about the nature of the thing.</p>
<h2 id="what-architecture-is-not">What Architecture Is Not</h2>
<p>The space left by every failed definition does not stay empty for long. A few specific misconceptions keep filling it, and each one was invited by some shorter version that came before it.</p>
<p>The first, my favorite, is that architecture is the diagrams. UML, C4 models, whiteboard sessions and ADRs are representations of architecture, and useful ones, but they are not the thing itself. A decision shapes what is possible, what is difficult, and what is effectively ruled out, regardless of whether anyone drew it on a whiteboard or wrote it down. You can tear up the whiteboard. The structural commitment has already been made.</p>
<p>The second misconception is that architecture is a phase. You do the architecture work, hand it off, and then engineering builds the thing. That model comes from construction, where you can hand off a completed design and step back, but software never reaches a completion state. It keeps changing; the structure that shapes it needs to keep being shaped, and a system left without active tending decays. Entropy is not a metaphor for codebases. It is the normal trajectory of any system that nobody is attending to.</p>
<p>The third misconception is that architecture lives in a role. One person or a small group owns the structural decisions, and everyone else builds inside them without needing to understand or influence them. This is the most consequential of the three because it concentrates exactly the wrong thing in exactly the wrong place. An architect who owns all the decisions has also insulated themselves from the feedback that would improve those decisions. The engineers closest to implementation are the ones who will live with the consequences, and cutting them out of the process means cutting out the most important signal in the room.</p>
<h2 id="deciding-which-tension-to-accept">Deciding Which Tension to Accept</h2>
<p>Clearing those misconceptions away is useful, but it still leaves the original question open. What I&rsquo;ve found myself reaching for, when people push past the misconceptions and ask what architecture actually is, is this:</p>
<blockquote>
<p>The practice of deciding which tensions to accept and which to release.</p>
</blockquote>
<p>Every structural decision trades something: consistency against availability, deployability against performance, team autonomy against system coherence. There is no configuration that resolves all of these simultaneously.</p>
<p>If you think you&rsquo;ve found something without a trade-off, you just haven&rsquo;t found it yet. Nobody opts out.</p>
<p>The right trade-off is always context-specific. The skill is not knowing the right tension to accept in the abstract but recognizing which tensions a specific context can absorb and which ones will compound painfully over time, then being deliberate about the choice rather than stumbling into it.</p>
<p>As the old adage goes, complexity cannot be destroyed, only relocated, and that relocation is the point. The architect&rsquo;s job is to decide where it lives and why that location is better than the alternatives, and no short answer can carry that much context without losing most of what matters.</p>
<h2 id="the-organization-is-the-architecture">The Organization Is the Architecture</h2>
<p>One of the things a short answer loses almost every time is the organizational dimension, and that loss is not trivial. Any definition of architecture that stops at the technical is incomplete. In 1968, Melvin Conway made an observation that reads almost like a complaint about a frustrating project: any organization that designs a system will produce a design whose structure is a copy of the organization&rsquo;s communication structure.</p>
<p>This may be hard to believe today, but Harvard Business Review actually rejected the paper (for lack of evidence). Decades later, a <a href="https://www.hbs.edu/faculty/Pages/item.aspx?num=32217" target="_blank" rel="noopener noreferrer">Harvard Business School study</a> confirmed exactly what Conway described, and subsequent research at MIT, the University of Maryland, and Tampere University of Technology validated it independently.</p>
<p>Conway illustrated the dynamic with a story from a compiler project he assigned to eight engineers, five to COBOL and three to ALGOL. Nobody decided how many phases each compiler would have, yet the COBOL compiler ended up with five phases and the ALGOL compiler with three. The structure of the output matched the structure of the team that built it, an emergent outcome nobody planned.</p>
<p>If you&rsquo;ve spent time inside a large engineering organization and wondered why the system looks the way it does, that story will feel less surprising than it should. The uncomfortable version of the observation is that the relationship runs in both directions. The organization shapes the system, and then the system shapes the organization. Teams form around modules, and modules persist because teams formed around them. Technical and organizational concerns co-evolve in ways that make it impossible to cleanly separate them, which is another reason any short definition falls apart. It leaves out half of what is actually happening.</p>
<h2 id="separating-decision-from-construction">Separating Decision from Construction</h2>
<p>That missing half also reshapes how we need to think about the architect&rsquo;s role. The broken metaphor at the center of how the field thinks about the architectural role is the building architect: design first, hand off to construction, step back.</p>
<p>The word itself comes from the Greek <em>arkhitekton</em>, meaning chief builder, and the original was not a separate designer standing apart from construction. The chief builder was the most skilled person on the site, someone who understood the full problem from the inside. The software industry borrowed the building architecture metaphor wholesale and quietly erased that origin, replacing it with a designer insulated from consequence.</p>
<p>Erik Dörnenburg identified this as a <a href="https://erik.doernenburg.com/2014/12/new-recording-of-architecture-without-architects/" target="_blank" rel="noopener noreferrer">structural information problem</a>, not a cultural one. When the decision-maker is systematically insulated from the feedback loop that would otherwise correct bad choices, the design drifts from the reality it is supposed to serve. The distinction Dörnenburg draws is between being aware of consequences and having to live with them, and that gap is where architectural decisions quietly degrade. The moment you separate design from construction, you are separating decision from consequence.</p>
<p>This leads to a formulation that still surprises people when they hear it:</p>
<blockquote>
<p>An architect&rsquo;s value is inversely proportional to the number of decisions they make.</p>
</blockquote>
<p>The goal of architectural leadership is to build the capacity for good structural decisions to happen across the teams doing the work, not to own those decisions permanently. This often gets misread as architecture without accountability, which couldn&rsquo;t be further from the truth. It requires far more architectural maturity, not less, because teams need to maintain decision records, argue trade-offs honestly, and hold themselves to a standard.</p>
<p>The concept only works where that maturity exists, and building that maturity is itself one of the architect&rsquo;s main jobs.</p>
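<p>To make &ldquo;decision records&rdquo; concrete: the lightweight ADR format many teams use fits on a single page. A minimal sketch, with an invented decision purely for illustration:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-markdown" data-lang="markdown"># ADR-017: Asynchronous events between billing and notifications

## Status
Accepted

## Context
Synchronous calls couple the two modules&#39; deploy cycles and availability.

## Decision
Billing publishes domain events; notifications consumes them from a queue.

## Consequences
We accept eventual consistency in notification delivery and gain
independent deployability. Revisit if delivery latency becomes user-visible.
</code></pre></div>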
<h2 id="the-elevator-and-the-engine-room">The Elevator and the Engine Room</h2>
<p>What that job looks like across an entire organization is something Gregor Hohpe captures in a <a href="https://martinfowler.com/articles/architect-elevator.html" target="_blank" rel="noopener noreferrer">metaphor</a> I&rsquo;ve returned to more than almost anything else in this field. Large organizations are tall buildings. The IT engine room is in the basement: the systems, the infrastructure, the code. The executive penthouse is at the top: the strategy, the resourcing decisions, the market bets. Between them are floors of management, and each floor is a translation layer where information degrades as it moves in either direction, telephone game dynamics at the organizational scale. The architect&rsquo;s job is to <em>ride the elevator</em>, carrying meaning intact in both directions across those translation layers.</p>
<p>I run an <a href="https://esilva.net/amet" target="_blank" rel="noopener noreferrer">Architecture Modernization Enabling Team (AMET)</a> at Epignosis, and this is what the work actually looks like in practice. The problems that determine whether a modernization succeeds are not purely technical. They are questions about which teams have capacity for which changes, how organizational constraints shape what sequences of work are even possible, and where leadership understanding needs to deepen before a technical choice can be made safely.</p>
<p>Architecture that stays in the engine room is working with half its inputs. Hohpe puts it plainly:</p>
<blockquote>
<p>Excessive complexity is nature&rsquo;s punishment for organizations that are unable to make decisions.</p>
</blockquote>
<p>Architecture is also about options, the right to defer a decision while locking in key parameters. In volatile conditions option value increases, and a system locked by deep coupling has had its options foreclosed. Modernization, in this framing, is not paying off the past. It is rebuilding the capacity to choose.</p>
<p>That framing also redefines what an enabling team is for. An enabling team exists to build capability in the teams doing the product work, not to own the work permanently.</p>
<p>The AMET model applies this logic specifically to modernization, as a bell curve of involvement that ramps up while capability is being built and tapers off as teams internalize it, until the enabling team eventually dissolves. That lifecycle is not the failure mode. The failure mode is an enabling team that never dissolves because it keeps doing the work instead of transferring the capacity.</p>
<p>There is a second failure mode that gets less attention, which is the team that dissolves before the capability transfer is genuine. The downslope of the bell curve only resolves correctly when the skills and confidence are actually there, and premature dissolution looks like success until the teams are on their own and discover what they did not actually internalize.</p>
<p>Both these failure modes point at the same thing from different directions: architecture done well is partly an exercise in making itself unnecessary. That is not something you can fit into two sentences without losing everything that makes it true.</p>
<h2 id="the-foundation-and-what-it-enables">The Foundation and What It Enables</h2>
<p>What it makes possible is the part of the argument that gets framed backwards almost every time I hear it.</p>
<p>The common version treats modernization as competing with innovation: time spent on the foundation is time not spent building new things, and every sprint on the former feels like something stolen from the latter.</p>
<p>What this misses is that a system that has not been modernized does not just move slowly. It actively constrains which questions engineers are allowed to ask, and when every change requires deep knowledge of how the system currently holds together, the mental load shifts from &ldquo;what should we build?&rdquo; to &ldquo;what can we build without breaking everything?&rdquo; That is not a resource problem. It is a cognitive constraint that narrows the product imagination of the entire organization.</p>
<p>What modernization actually enables is <em>optionality</em>: the capacity to change direction without foreclosing the future, to experiment in one slice of the system without risking another, to run multiple hypotheses simultaneously because the boundaries are clean enough to hold them.</p>
<p>The features you could not build on the old foundation leave no trace, and nobody wrote them on a roadmap. The innovation that never happened is invisible, which is precisely why modernization is chronically undervalued: <strong>its benefits are counterfactual, and counterfactuals do not appear in sprint reports.</strong></p>
<h2 id="the-question-persists">The Question Persists</h2>
<p>The same invisible cost applies to the question itself. An engineer without language for what architecture is will still make structural decisions, and that gap accumulates silently, below the threshold of any report, in exactly the same way as the features that never got built.</p>
<p>Which brings me to the part where I go against everything in this article and add to the pile:</p>
<blockquote>
<p>Architecture is the practice of maintaining the conditions under which better decisions remain possible.</p>
</blockquote>
<p>You can probably drill more holes in it than in most short definitions. But it happens to be the one I <em>like</em> the most, and the one I wish I had at the ready four months ago. It would not have been a <em>satisfactory</em> answer, of course.</p>
<p>But it might, just might, have saved me from that polite nod.</p>
]]></content:encoded></item><item><title>I Generated Six Learning Roadmaps and Didn't Finish Any of Them</title><link>https://blog.talentlms.io/posts/slow-learning/</link><pubDate>Mon, 16 Feb 2026 00:00:00 +0000</pubDate><dc:creator>Christina Koleri</dc:creator><guid>https://blog.talentlms.io/posts/slow-learning/</guid><description>I generated six AI-powered learning roadmaps for Go in a year. Each one was well-structured, personalized, and genuinely good. None survived past day three.
This isn't about AI being bad for learning. It's about a specific trap: when everything accelerates, starting over becomes frictionless, and depth never has a chance.</description><enclosure url="https://blog.talentlms.io/images/posts/slow-learning.png" type="image/png"/><media:content url="https://blog.talentlms.io/images/posts/slow-learning.png" medium="image"><media:title type="plain">Two abstract painted figures sitting together reading from a shared page, surrounded by faint scattered map-like forms on a warm background</media:title><media:description type="plain">Two abstract painted figures sitting together reading from a shared page, surrounded by faint scattered map-like forms on a warm background</media:description></media:content><content:encoded><![CDATA[<img src="https://blog.talentlms.io/images/posts/slow-learning.png" alt="Two abstract painted figures sitting together reading from a shared page, surrounded by faint scattered map-like forms on a warm background" style="max-width: 100%; height: auto; margin-bottom: 1.5em;" /><p style="margin-bottom: 1.5em; padding: 1em; background-color: #f5f5f5; border-left: 4px solid #0066cc;"><strong>Christina Koleri</strong>, Software Engineer<br/>Christina started as a mechanical engineer before computers won her over. Annoyingly curious by nature, she believes the best solutions are usually the simplest ones — they're just harder to find. …</p><p>I have generated six different AI-powered learning roadmaps for Go in the past year. Each one was well-structured, personalized, and genuinely good. None of them survived past day three.</p>
<p>This isn&rsquo;t a post about AI being bad for learning. It&rsquo;s about a specific trap I kept falling into.</p>
<h2 id="everything-is-faster-now">Everything Is Faster Now</h2>
<p>Everything in tech moves faster than it used to. That&rsquo;s not controversial. We write code faster. We ship faster. We find answers faster. The tools are better, AI is accelerating everything, and the industry moves at a pace that would have been unrecognizable five years ago.</p>
<p>And we&rsquo;ve adapted. We genuinely pick things up faster than developers did a decade ago. That&rsquo;s real.</p>
<p>But speed has a cost. When everything around you accelerates, you feel the pressure to accelerate too. A new framework drops and you haven&rsquo;t tried it yet. A colleague mentions a tool you&rsquo;ve never heard of and suddenly you feel behind. The industry moves and you need to move with it, or at least that&rsquo;s what it feels like. There&rsquo;s this quiet anxiety humming in the background: whether you&rsquo;re learning the right thing (whatever &ldquo;right&rdquo; even means), whether you&rsquo;re fast enough, whether you&rsquo;ll become irrelevant if you stop moving.</p>
<p>And in that rush to keep up, something gets lost. We reach for tools and frameworks and concepts without fully understanding them. We get surface-level familiarity with twelve things instead of real understanding of one. We consume explanations that make perfect sense in the moment and evaporate by the next morning. I was no exception. I was moving fast. But I&rsquo;m not sure I was actually getting anywhere.</p>
<p>This is especially painful if your brain is wired the way mine is. I studied mechanical engineering before I got into software. I didn&rsquo;t end up working as one, but the training left a permanent mark. When I encounter something new, I can&rsquo;t just use it. I need to understand how it works. Not the API. The mechanism. Why it was designed that way. What&rsquo;s happening underneath. That need for depth is constantly at war with the pressure to go wide and fast. The result is a person who starts a lot of things and finishes approximately none of them.</p>
<h2 id="the-loop">The Loop</h2>
<p>I&rsquo;ve been writing PHP for about four years. A LOT of PHP. But I&rsquo;ve had this itch to learn Go because I want to understand how the infrastructure tools I use every day actually work under the hood. The CLI tools, the CI/CD pipelines, the container orchestration, the devops stuff. Look at the cloud-native ecosystem and it&rsquo;s Go everywhere: Docker, Kubernetes, Terraform, Prometheus, Helm. It&rsquo;s the language that most of this world is built in.</p>
<p>Here&rsquo;s how it always went. I&rsquo;d get excited. I&rsquo;d open Claude or ChatGPT and say &ldquo;create me a roadmap to learn Go.&rdquo; I&rsquo;d tweak it a bit, make it mine. Plan ready. Time to learn.</p>
<p>Day one: I&rsquo;m in the docs, writing little programs, feeling great.</p>
<p>Day two: setting up my editor, installing tools, taking notes.</p>
<p>Day three: I stumble on something completely unrelated. Maybe someone posts their Raspberry Pi homelab setup and I think &ldquo;I should build one of those.&rdquo; Maybe I read about Rust and think &ldquo;wait, should I learn Rust instead?&rdquo; Maybe a new DevOps tool shows up on Hacker News. And suddenly I&rsquo;m down THAT rabbit hole, generating THAT roadmap, and Go is sitting in a forgotten terminal tab.</p>
<p>A month later, I&rsquo;d circle back. &ldquo;OK, for real this time.&rdquo; New roadmap. New plan. Same result.</p>
<p>Six times.</p>
<p>And AI made the cycle frictionless. Thirty seconds and I have a beautiful, comprehensive plan for literally anything I&rsquo;m curious about. The barrier to starting something new is zero. Which means the barrier to abandoning the previous thing is also zero.</p>
<p>Each time, I&rsquo;d understand a little bit. Enough to feel like I was making progress. But it was Kahneman&rsquo;s System 1 thinking: fast, intuitive, and shallow. It felt like learning. It wasn&rsquo;t sticking.</p>
<p>Fred Brooks wrote in <em>The Mythical Man-Month</em> that nine women can&rsquo;t make a baby in one month. Some things have an irreducible timeline. Understanding is one of them. You can speed up access to information, but you cannot speed up the moment where your brain actually wrestles with a concept and comes out the other side getting it. That moment takes however long it takes, and I kept trying to rush past it.</p>
<h2 id="how-it-started-the-zed-post">How It Started (The Zed Post)</h2>
<p>Then at some point, Zed&rsquo;s <a href="https://zed.dev/blog/lets-git-together" target="_blank" rel="noopener noreferrer">&ldquo;Let&rsquo;s Git Together&rdquo;</a> blog post came my way. Their team had paired with community contributors for eight weeks and shipped 66 Git improvements. They had a curated project board, weekly pairing sessions, biweekly demo days. People showing up consistently, working on the same thing, keeping each other moving.</p>
<p>Something clicked. Not about the program itself, but about the structure. The regularity. The showing up. Maybe I could find someone to learn with like that.</p>
<p>I found a colleague who was also interested in learning Go. We agreed to jump on a call a few times a week, a couple of hours each time. Loose plan: learn together, maybe eventually contribute to some open source.</p>
<p>First session. No roadmap. No chapter one. We opened a small Go project on GitHub (genuinely not great code, which honestly made it more fun) and just started reading it together. Line by line.</p>
<p>&ldquo;OK so this is the main function I think.&rdquo;<br>
&ldquo;What does that <code>:=</code> thing do?&rdquo;<br>
&ldquo;I think it&rsquo;s like&hellip; shorthand for declaring a variable? Let me check.&rdquo;<br>
&ldquo;Why is THIS function capitalized but that one isn&rsquo;t?&rdquo;<br>
&ldquo;OH. Oh wait. Is that the public/private thing?&rdquo;</p>
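<p>Both guesses turned out to be right, for the record. A tiny standalone snippet, mine rather than the project&rsquo;s, that captures both answers:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-go" data-lang="go">package main

import &#34;fmt&#34;

// Greet is exported: the capitalized name is visible to other packages.
func Greet(name string) string {
	return &#34;Hello, &#34; + name
}

// greet is unexported: the lowercase name is private to this package.
func greet(name string) string {
	return &#34;hi again, &#34; + name
}

func main() {
	// := declares and initializes in one step; the type (string) is inferred.
	msg := Greet(&#34;Go&#34;)
	fmt.Println(msg)
	fmt.Println(greet(&#34;Go&#34;))
}
</code></pre></div>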
<p>When we got stuck we asked Copilot. Got the answer, talked about it until we both understood it, and kept reading. Two people squinting at code and talking out loud.</p>
<p>It was SLOW. Way slower than reading an AI explanation. We spent twenty minutes on things an AI could have explained in twenty seconds. We went on tangents. We got things wrong and had to backtrack.</p>
<p>In a few hours I understood more about Go than in all my fast solo attempts combined. Because when I had to explain something out loud, my brain couldn&rsquo;t coast. I had to organize the thought well enough to say it. And when I got it wrong, my colleague would go &ldquo;hmm, that doesn&rsquo;t sound right&rdquo; and we&rsquo;d dig into it together. That friction, the getting-it-wrong part, is where the understanding actually happened.</p>
<p>But beyond that, something I didn&rsquo;t expect: I was enthusiastic. The feeling of pulling someone forward and being pulled forward at the same time. I couldn&rsquo;t wait for the next session. That had never happened with any roadmap. And when someone is expecting me on Thursday, I can&rsquo;t quietly wander off to research Rust or build a homelab. The commitment acts as a filter on my curiosity. It doesn&rsquo;t kill it (nothing could), but it channels it.</p>
<h2 id="how-its-going">How It&rsquo;s Going</h2>
<p>I still use AI every day. I used it today. I&rsquo;ll use it tomorrow. I literally used AI to help me edit this article.</p>
<p>What I changed is small: I added a regular call with another person learning the same thing. That&rsquo;s it.</p>
<p>We read open source Go projects together. We argue about what the code is doing. Sometimes we spend way too long on something. It&rsquo;s not efficient at all.</p>
<p>Three weeks in, I haven&rsquo;t abandoned Go. I haven&rsquo;t jumped to some new shiny thing. I actually want to show up to the next call. For someone with my track record, that&rsquo;s remarkable.</p>
<p>Maybe the problem was never discipline. Maybe it&rsquo;s that everything around me was telling me to go faster, and I&rsquo;d been listening. Some things can&rsquo;t be rushed. Nine women, one month, no baby.</p>
<p>I didn&rsquo;t need another roadmap. I just needed someone to be confused with, and be slow together.</p>
]]></content:encoded></item><item><title>Selling Tactics Through the Lens of an Engineer</title><link>https://blog.talentlms.io/posts/selling-to-engineers/</link><pubDate>Mon, 02 Feb 2026 00:00:00 +0000</pubDate><dc:creator>Vassilis Poursalidis</dc:creator><guid>https://blog.talentlms.io/posts/selling-to-engineers/</guid><description>Modern B2B sales has evolved from annoying persistence into systematic manipulation, using tactics that exploit identity, manufacture intimacy, and weaponize professional boundaries.
These approaches persist because the conversion rate justifies burning bridges with the majority of prospects.
This article dissects the playbooks from an engineer's perspective, revealing why they work, who they target, and how we can collectively demand better.</description><enclosure url="https://blog.talentlms.io/images/posts/selling-to-engineers.png" type="image/png"/><media:content url="https://blog.talentlms.io/images/posts/selling-to-engineers.png" medium="image"><media:title type="plain">Abstract painted path with a developer trying to avoid the sirens of sales pitches on a warm background</media:title><media:description type="plain">Abstract painted path with a developer trying to avoid the sirens of sales pitches on a warm background</media:description></media:content><content:encoded><![CDATA[<img src="https://blog.talentlms.io/images/posts/selling-to-engineers.png" alt="Abstract painted path with a developer trying to avoid the sirens of sales pitches on a warm background" style="max-width: 100%; height: auto; margin-bottom: 1.5em;" /><p style="margin-bottom: 1.5em; padding: 1em; background-color: #f5f5f5; border-left: 4px solid #0066cc;"><strong>Vassilis Poursalidis</strong>, TalentLMS Engineering Director<br/>Vassilis has nearly 20 years of experience working in diverse projects in the technology industry, with a strong record of leadership and technical expertise.

He is currently an Engineering Director …</p><p>Almost everyone has experienced it: the relentless stream of LinkedIn connection requests, the &ldquo;just following up&rdquo; emails, the messages that start with &ldquo;Hey Vassilis!&rdquo; as if you&rsquo;re old friends. Some are merely persistent. Others cross into territory that feels manipulative, even offensive.</p>
<p>For years, I sat on the receiving end of these approaches, treating them as little more than background noise. My strategy was simple: <strong>ignore and move on.</strong> I viewed these messages as a byproduct of having an online presence; annoying, but harmless. However, lately, the tactics have shifted. I began seeing patterns that moved beyond mere persistence and into calculated manipulation. <strong>The uncomfortable truth? They work.</strong> And they work often enough to justify their continued use.</p>
<p>This crossing of the threshold (from annoying, to deceptive) is what finally prompted me to speak up. It is one thing to be sold to; it is another to be engineered. Let&rsquo;s dissect the mechanics of these modern B2B playbooks, why they persist despite the friction they cause, and what this evolution reveals about the current and future state of sales.</p>
<h2 id="the-standard-playbooks">The Standard Playbooks</h2>
<h3 id="the-multi-touch-sequence">The Multi-Touch Sequence</h3>
<p>This is sales 101: contact a prospect across multiple channels over 2-3 weeks. The pattern is based on the widely cited statistic that it takes 7-8 &ldquo;touches&rdquo; to get a response from a cold prospect. The persistence isn&rsquo;t personal; it&rsquo;s systematic.</p>
<h3 id="the-value-first-approach">The Value-First Approach</h3>
<p>Rather than leading with a pitch, they offer something first: a relevant article, an invitation to a webinar, a whitepaper, or benchmark report. By &ldquo;providing value&rdquo; upfront, they create a subtle obligation and establish themselves as a helpful resource rather than just another vendor. The ask comes later, after you&rsquo;ve presumably been warmed up by their generosity.</p>
<h3 id="the-problem-aware-template">The Problem-Aware Template</h3>
<p>These messages reference specific pain points supposedly common to software engineering departments at companies like yours:</p>
<blockquote>
<p>I noticed companies like [company name] often struggle with scaling engineering teams / managing technical debt / attracting senior talent&hellip;</p>
</blockquote>
<p>Here&rsquo;s the thing: they&rsquo;re using industry generalizations rather than researching your actual situation.</p>
<h3 id="the-social-proof-play">The Social Proof Play</h3>
<blockquote>
<p>We work with [competitor or similar product], and they&rsquo;ve seen a 40% improvement in deployment velocity.</p>
</blockquote>
<p>This simultaneously creates legitimacy (if our competitors use them, maybe we should) and FOMO, the fear of missing out (are we falling behind?). Sometimes the social proof is genuine. But most of the time it&rsquo;s cherry-picked, exaggerated, or strategically vague.</p>
<h3 id="the-trigger-event-approach">The Trigger Event Approach</h3>
<p>These reps are monitoring your company for signals: funding announcements, job postings for senior engineers, product launches, conference appearances. They time their outreach for moments when you might actually need their solution. You may think this is actually one of the more legitimate approaches (at least they&rsquo;re doing their homework!), but it still means your company&rsquo;s activities and your own personal activities are being fed into sales automation systems across dozens of vendors.</p>
<h3 id="other-common-approaches">Other Common Approaches</h3>
<p>Beyond these core playbooks, you&rsquo;ll encounter several other variations:</p>
<ul>
<li><strong>The &ldquo;Breakup&rdquo; Email</strong>: After no response, they send a final message - &ldquo;I&rsquo;ll assume this isn&rsquo;t a priority and remove you from my list&rdquo; - designed to trigger guilt.</li>
<li><strong>The &ldquo;Quick Question&rdquo; Hook</strong>: Messages framed as a quick, low-effort ask when a full sales pitch is actually waiting on the other side.</li>
<li><strong>The Survey/Research Request</strong>: &ldquo;We&rsquo;re conducting research on engineering leadership&rdquo;, but the research mysteriously reveals you need their offering.</li>
<li><strong>The Comparison/Audit Offer</strong>: A &ldquo;free assessment&rdquo; of your current infrastructure that&rsquo;s really just sales qualification disguised as service.</li>
<li><strong>The Latest Hype Integration Play</strong>: &ldquo;Your customers are demanding AI features&rdquo;, the latest hype cycle where every vendor suddenly has an AI solution that will help you add intelligence to your product.</li>
</ul>
<p>And of course countless other baiting techniques are waiting for you out there.</p>
<h2 id="when-playbooks-turn-manipulative">When Playbooks Turn Manipulative</h2>
<p>The tactics above are standard, perhaps even acceptable. But there&rsquo;s a darker set of approaches that cross ethical lines.</p>
<h3 id="the-escalation-ladder-from-polite-to-aggressive">The Escalation Ladder: From Polite to Aggressive</h3>
<p>Watch as the tone shifts across this sequence:</p>
<ul>
<li><strong>Touches 1-2</strong>: Friendly, helpful tone. &ldquo;I&rsquo;d love to show you how we&rsquo;re helping companies like yours&hellip;&rdquo;</li>
<li><strong>Touches 3-4</strong>: Mild pressure. &ldquo;Just circling back on my previous message&hellip;&rdquo;</li>
<li><strong>Touches 5-6</strong>: Guilt or challenge. &ldquo;I haven&rsquo;t heard back - is this not a priority for you right now?&rdquo;</li>
<li><strong>Touches 7+</strong>: Aggressive ultimatum. &ldquo;Should I assume you&rsquo;re not the right person?&rdquo; or &ldquo;Can I close your file?&rdquo;</li>
</ul>
<p>The psychology is deliberate: create urgency or trigger a response through discomfort. Some Sales Development Reps (SDRs) are explicitly taught that a &ldquo;no&rdquo; is better than silence, because it&rsquo;s a response they can log and report to their manager.</p>
<h3 id="the-gatekeeper-bypass">The Gatekeeper Bypass</h3>
<p>You respond to a cold outreach stating that you are not the right person, or perhaps you choose not to respond at all. Instead of accepting this boundary, they counter: &ldquo;I understand - who should I speak with instead?&rdquo; or &ldquo;Would you mind introducing me to the person who handles this?&rdquo;</p>
<p>Think about the audacity here. You have clearly indicated a lack of interest or relevance, yet their response is to dismiss your boundary and <strong>deputize you into their sales force</strong>. By asking for a direct introduction or a colleague&rsquo;s contact details, they are asking you to compromise your internal network for their commercial gain.</p>
<p>In any other context, providing internal contact paths to an unverified external actor would be flagged as a security risk. In sales, it’s framed as &ldquo;helpfulness.&rdquo; You should not, under any circumstances, provide the details of your colleagues to a cold caller. If you genuinely believe an offering has merit, you may choose to discuss this internally with your company and then decide how to proceed.</p>
<h3 id="manufactured-intimacy">Manufactured Intimacy</h3>
<p>Messages that open with &ldquo;Hey [First Name]!&rdquo; and casual language as if you&rsquo;re already acquainted. Or worse: &ldquo;We&rsquo;re delighted to have met you at [conference name]!&rdquo; when you&rsquo;ve never communicated at all.</p>
<p>They&rsquo;re trained to write as if they&rsquo;re continuing an existing relationship rather than initiating a cold contact. It&rsquo;s deliberately deceptive, designed to bypass your mental filters for unsolicited sales approaches.</p>
<h3 id="identity-based-manipulation-where-it-gets-personal">Identity-Based Manipulation: Where It Gets Personal</h3>
<p>This is where sales tactics cross from annoying into genuinely offensive territory.</p>
<p>You receive a message that opens with:</p>
<ul>
<li>&ldquo;As a fellow Greek&hellip;&rdquo; (or even worse, written in Greek)</li>
<li>&ldquo;I noticed we&rsquo;re both from [your hometown]&hellip;&rdquo;</li>
<li>&ldquo;My family is also from [your region of origin]&hellip;&rdquo;</li>
<li>Comments about your name&rsquo;s etymology or ancestral background</li>
</ul>
<p>This isn&rsquo;t accidental. This is weaponizing your identity and heritage as a sales tool.</p>
<p><strong>Why this is particularly egregious</strong>: It exploits cultural norms around helping &ldquo;one of your own.&rdquo; It creates artificial obligation based on shared background. It takes something deeply personal (your heritage, your roots, your identity) and commodifies it for commercial gain.</p>
<p>Some sales trainers actually teach this as &ldquo;finding common ground&rdquo; or &ldquo;building rapport.&rdquo; But there&rsquo;s nothing genuine about it. It&rsquo;s manufacturing false kinship for the explicit purpose of making a sale.</p>
<p>For those of us from smaller ethnic communities or regions, this manipulation is especially invasive. The bonds within these communities are real and meaningful. Using them as a sales tactic is a calculated exploitation of something genuine, and it is morally wrong.</p>
<h2 id="the-uncomfortable-economics">The Uncomfortable Economics</h2>
<p>Here&rsquo;s the part that&rsquo;s hard to accept: <strong>these tactics work</strong>.</p>
<p>Not on you, perhaps. You may be immune or have developed pattern recognition to understand and avoid these approaches. But they work often enough across the broader target audience to justify their continued use.</p>
<p>The few prospects who do respond can generate enough potential revenue to make the entire campaign &ldquo;successful.&rdquo; The remaining 99+% who were spammed or offended simply don&rsquo;t matter in this calculation, because as it turns out the ends justify the means. Let that sink in for a bit.</p>
<h3 id="who-responds-to-these-tactics">Who Responds to These Tactics?</h3>
<p>They&rsquo;re effective with:</p>
<ul>
<li><strong>Less experienced engineers</strong> who haven&rsquo;t yet developed pattern recognition for sales approaches</li>
<li><strong>People-pleasers</strong> who feel guilty not responding or uncomfortable refusing to help</li>
<li><strong>Those from cultures with strong in-group obligations</strong> (the ethnicity/ancestry play specifically targets this)</li>
<li><strong>Those who think that responding</strong> will just make it stop (personally, if I have not subscribed to a newsletter, I will not even hit unsubscribe, since that signals that someone is listening)</li>
<li><strong>People who haven&rsquo;t learned to set firm boundaries</strong> with cold outreach</li>
</ul>
<h3 id="the-race-to-the-bottom">The Race to the Bottom</h3>
<p>The companies that use manipulative tactics have optimized for conversion rate, not reputation. They&rsquo;ve calculated that:</p>
<ul>
<li>Burning bridges with 99% of prospects is acceptable collateral damage</li>
<li>Offending people doesn&rsquo;t hurt them (those people weren&rsquo;t going to buy anyway, goes the logic)</li>
<li>The people who respond don&rsquo;t seem to care about the tactics used</li>
</ul>
<p>It&rsquo;s the same economic model as spam email or robocalls. A tiny success rate justifies a massive negative impression on everyone else.</p>
<p>And here&rsquo;s the meta problem: this creates a race to the bottom. Companies see competitors using aggressive tactics and getting results, so they adopt them too. Sales leaders promote SDRs who hit quota using these methods, reinforcing them as &ldquo;best practices.&rdquo; The system perpetuates itself.</p>
<h2 id="the-opportunity-cost-what-youre-not-getting">The Opportunity Cost: What You&rsquo;re Not Getting</h2>
<p>Here&rsquo;s something that makes these tactics even more frustrating: in theory, sales conversations could be valuable learning opportunities. A good salesperson understands their market deeply. They talk to dozens of engineers every week. They see patterns across companies, industries, and use cases. They could offer genuine insights about what your peers are struggling with, what solutions are actually working, and what emerging challenges you should be thinking about.</p>
<p>But that&rsquo;s not what happens with these playbook-driven approaches. The conversation is extractive, not collaborative. They&rsquo;re mining you for information (budget size, decision timeline, pain points they can exploit) while offering nothing substantive in return. The entire interaction is optimized for moving you through their sales funnel, not for mutual value exchange. Be vigilant, too, about what information you are allowed to share about your company.</p>
<p>This is the real tragedy of modern sales playbooks: they&rsquo;ve eliminated the possibility of unexpected learning. You might engage with a cold outreach hoping to discover something new about your market, a technology, or an approach you hadn&rsquo;t considered. Almost universally, you come away with nothing but time wasted.</p>
<p>Now imagine an interaction where someone reaches out with genuine domain expertise and offers real insights before asking for anything. Not vague numbers, but the actual facts of what the value proposition is. Would you remember them years later?</p>
<h2 id="what-this-means-for-you">What This Means for You</h2>
<p>First, recognize that you&rsquo;re the target of an industrial process. Sales development is now heavily automated, data-driven, and optimized. The personalization you see is often manufactured at scale. The commonalities they mention are frequently mined from your LinkedIn profile by automation tools.</p>
<p>Second, understand that your response (or lack thereof) is a data point in their system. They&rsquo;re iterating on what works. Every A/B test, every message variation, every new manipulation tactic exists because somewhere, at some point, it outperformed the alternative.</p>
<p>Third, accept that the vast majority of these interactions will offer you nothing of value. You&rsquo;re not missing out by ignoring them. The chance that you&rsquo;ll learn something genuinely useful is vanishingly small; not because the products are necessarily bad, but because the sales process is designed for extraction, not education.</p>
<p>Fourth, and most importantly: <strong>you owe these cold outreach attempts nothing</strong>. No response, no explanation, no referral to a colleague. The manufactured intimacy is not real. The shared heritage angle is exploitation, not community. The aggressive follow-ups after you&rsquo;ve said no are not deserving of guilt.</p>
<p>Your attention and your network are valuable. Protect them accordingly and follow up only if you are genuinely interested in their offering.</p>
<h2 id="a-better-way-forward">A Better Way Forward?</h2>
<p>For sales professionals reading this: I understand you have quotas. I understand the pressure you&rsquo;re under. But consider what you&rsquo;re optimizing for. Your help is needed to fix this broken system.</p>
<p>The manipulative tactics I&rsquo;ve described may generate short-term results, but they&rsquo;re burning long-term trust, not just in your company but in B2B outreach as a whole. They&rsquo;re training an entire generation of software engineers to automatically dismiss and block sales outreach. They&rsquo;re making it harder for every company out there to get through the noise.</p>
<p>There is a better approach: do real research, reach out with genuine specificity, respect boundaries when set, and build actual relationships rather than manufactured ones. It&rsquo;s slower. It doesn&rsquo;t scale as well. But it works with the prospects who matter most, the true potential buyers with real budgets and real problems.</p>
<p>For fellow engineers: share your experiences. When a vendor uses manipulative tactics, tell your network. When someone reaches out with genuine value and respect for your time, remember them and speak about their company. We can collectively shape what &ldquo;best practices&rdquo; look like by rewarding the behavior we want to see. And remember, those tactics are mostly executed by companies with products and services that do not speak for themselves.</p>
<p>The current system works because we allow it to. We can choose differently.</p>
]]></content:encoded></item><item><title>Every AI Workflow Assumes You're Starting Fresh. I'm Not.</title><link>https://blog.talentlms.io/posts/claude-code-setup-for-brownfield-projects/</link><pubDate>Mon, 19 Jan 2026 00:00:00 +0000</pubDate><dc:creator>Christina Koleri</dc:creator><guid>https://blog.talentlms.io/posts/claude-code-setup-for-brownfield-projects/</guid><description>Most AI coding workflows assume you're building something new from scratch. But what if you're building something that spans multiple existing codebases, each with years of accumulated quirks?
This is how I stopped fighting context loss and built a system that actually works with existing codebases.</description><enclosure url="https://blog.talentlms.io/images/posts/claude-code-setup-for-brownfield-projects.png" type="image/png"/><media:content url="https://blog.talentlms.io/images/posts/claude-code-setup-for-brownfield-projects.png" medium="image"><media:title type="plain">Abstract painted figure placing an anchor mark on a surface with fading forms behind and clear space ahead, representing context preservation across sessions on a warm background</media:title><media:description type="plain">Abstract painted figure placing an anchor mark on a surface with fading forms behind and clear space ahead, representing context preservation across sessions on a warm background</media:description></media:content><content:encoded><![CDATA[<img src="https://blog.talentlms.io/images/posts/claude-code-setup-for-brownfield-projects.png" alt="Abstract painted figure placing an anchor mark on a surface with fading forms behind and clear space ahead, representing context preservation across sessions on a warm background" style="max-width: 100%; height: auto; margin-bottom: 1.5em;" /><p style="margin-bottom: 1.5em; padding: 1em; background-color: #f5f5f5; border-left: 4px solid #0066cc;"><strong>Christina Koleri</strong>, Software Engineer<br/>Christina started as a mechanical engineer before computers won her over. Annoyingly curious by nature, she believes the best solutions are usually the simplest ones — they're just harder to find. …</p><p><em>How I built a simple setup to stop losing context in brownfield projects</em></p>
<hr>
<p>I came back from the holidays and spent almost a full day trying to remember what I was doing.</p>
<p>I&rsquo;m not talking about forgetting how to code. I&rsquo;m talking about the hundreds of small discoveries (quirks, edge cases, &ldquo;oh <em>that&rsquo;s</em> why it does that&rdquo; or &ldquo;this is how it is supposed to work, it&rsquo;s a feature not a bug&rdquo;) that led me toward certain solutions and away from others. I use Claude Code as my pair programming partner. It does the heavy lifting of searching and figuring things out, while I challenge its proposals, find problems, and work through trade-offs together. But all that accumulated knowledge? Gone.</p>
<p>I&rsquo;m a software engineer working on unifying multiple projects that were never designed to talk to each other. Identity brokering, data synchronization, event-driven communication between systems that evolved independently for years.</p>
<p>Here&rsquo;s the fun part: I&rsquo;m touching <strong>five different GitHub repositories</strong>. One is deep legacy, the ancient beast. One is middle-aged legacy, newer but full of quirks. One I don&rsquo;t even build myself; I guide external developers and hope my documentation is clear enough.</p>
<p>🥲</p>
<p>And I&rsquo;d been using Claude Code to help me navigate this mess. It was going great, until it wasn&rsquo;t.</p>
<h2 id="the-documentation-death-spiral">The Documentation Death Spiral</h2>
<p>AI tools make documentation trivially easy. I can generate a <code>README</code>, a design doc, an architecture overview in minutes. And with so much unfamiliar code to cover, I <em>needed</em> that detail: both to understand how these projects work and to capture <em>why</em> we made certain decisions. You know the pattern when such detail doesn&rsquo;t exist: two months from now you&rsquo;re wondering why we did X and not Y, you try the &ldquo;obvious&rdquo; approach, hit a wall halfway through, and suddenly remember &ldquo;oh right, <em>that&rsquo;s</em> why we did it the other way.&rdquo;</p>
<p>But keeping documentation accurate is hard. With such a large surface area (five repos, identity flows, event systems), the details change constantly during a conversation. You start with an approach, hit a limitation, tweak it, hit another, tweak again. By the end, you have a detailed document that&rsquo;s accurate about the <em>outcome</em> but wrong about half the specifics.</p>
<p>Okay, so document at the end of the process instead? But by then, several compactions have happened, and I&rsquo;ve already lost information. I end up repeating myself, re-explaining architecture, re-discovering edge cases. After a while, this became my main bottleneck.</p>
<p>I tried adding instructions to <code>CLAUDE.md</code> telling Claude to update documentation as it worked. I tried using hooks to automatically save state before clearing or compacting. I tried splitting knowledge across multiple markdown files, one per area. I tried consolidating into a single large file. None of it worked.</p>
<p>Even when I got the documentation right in the moment, it doesn&rsquo;t stay right. Two weeks later, someone mentions a constraint I&rsquo;d completely missed, and now that document needs updating. Good luck remembering (you or Claude) that this thing was even documented in the first place.</p>
<p>Long story short, I&rsquo;ve restarted the documentation process from scratch three times now, each with a different approach.</p>
<hr>
<h2 id="understanding-the-actual-problem">Understanding the Actual Problem</h2>
<p>I started reading documentation (the irony) to understand what was happening.</p>
<p><strong>Context compaction is lossy by design.</strong> When conversations get long, Claude Code automatically summarizes older parts to make room. The summary captures the gist, but not the nuance: why you rejected an approach, which edge cases you already handled, what specific constraints led to a decision. After a few compactions, Claude might suggest the exact same tweak you rejected an hour ago. The details that led you to reject it are gone.</p>
<p><strong>Auto-compact can&rsquo;t be customized.</strong> It triggers automatically at ~95% context. You can disable it via <code>/config</code> in the CLI, but then you just hit a hard failure when context fills up. Pick your poison.</p>
<p><strong>Hooks don&rsquo;t help here, at least not yet.</strong> <code>PreCompact</code> hooks only support shell commands, not prompts. You can run a script, but you can&rsquo;t ask Claude to do anything. There&rsquo;s no <code>PreClear</code> hook at all. And built-in slash commands like <code>/clear</code> and <code>/compact</code> bypass <code>UserPromptSubmit</code>, so you can&rsquo;t intercept those either.</p>
<p><strong>CLAUDE.md instructions aren&rsquo;t always followed.</strong> I had instructions telling Claude to update documentation as it worked. Sometimes it did. Often it didn&rsquo;t. Claude tends to treat instructions as suggestions rather than requirements. This is a <a href="https://github.com/anthropics/claude-code/issues/15443" target="_blank" rel="noopener noreferrer">known issue</a> — one user put it bluntly: &ldquo;Does that mean that with each prompt I need to tell you to follow the instructions? &hellip;Yes, unfortunately.&rdquo;</p>
<p><strong>Multiple documentation files don&rsquo;t scale.</strong> New information often needs to update more than one file, and that doesn&rsquo;t always happen. Then Claude references outdated information from the wrong file, and you&rsquo;re back to square one.</p>
<p><strong>A single large documentation file bloats context.</strong> The more detail you add, the more context it eats. And Claude&rsquo;s accuracy degrades as context fills up.</p>
<p><strong>The workflows assume greenfield.</strong> Every tutorial I found follows almost the same pattern: &ldquo;describe your idea and let Claude build it.&rdquo; When you&rsquo;re starting fresh, you can plan everything upfront. No legacy constraints, no mid-stream discoveries that force you to pivot. Nobody&rsquo;s writing about maintaining five existing repos where you&rsquo;re constantly discovering how things actually work, then trying to remember those discoveries next week.</p>
<p>So I stopped looking for the clever solution and started looking for the dumbest thing that might actually work.</p>
<h2 id="what-actually-worked">What Actually Worked</h2>
<p>Instead of relying on <code>CLAUDE.md</code> to automatically update docs (which it often ignores), I took explicit control.</p>
<p><strong>One command: <code>/save</code></strong></p>
<p>That&rsquo;s it. When I want to preserve state, I run <code>/save</code>. It tells Claude to update the status file with current progress, decisions, blockers, and anything we discovered about how the code works. Then I run <code>/clear</code> for a fresh start.</p>
<p>No automation that sometimes works. No hoping Claude remembers. Just a manual habit, and for now, a reliable habit beats unreliable automation. Before any transition (break, end of day, context getting full), run <code>/save</code>, then <code>/clear</code>.</p>
<h2 id="the-actual-setup">The Actual Setup</h2>
<p>Here&rsquo;s what the structure looks like. I&rsquo;ll walk through the important parts.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>workspace/
</span></span><span style="display:flex;"><span>├── PROJECT_STATUS.md           <span style="color:#75715e"># What&#39;s done, in progress, blocked, decisions</span>
</span></span><span style="display:flex;"><span>├── PRD_REQUIREMENTS.md         <span style="color:#75715e"># Current requirements (synced from Confluence)</span>
</span></span><span style="display:flex;"><span>├── CLAUDE.md                   <span style="color:#75715e"># Instructions for Claude</span>
</span></span><span style="display:flex;"><span>├── docs/understanding/         <span style="color:#75715e"># What I&#39;ve learned about each codebase</span>
</span></span><span style="display:flex;"><span>│   ├── project-alpha.md
</span></span><span style="display:flex;"><span>│   ├── project-beta.md
</span></span><span style="display:flex;"><span>│   └── ...
</span></span><span style="display:flex;"><span>├── project-alpha/              <span style="color:#75715e"># Cloned repo</span>
</span></span><span style="display:flex;"><span>├── project-beta/               <span style="color:#75715e"># Cloned repo</span>
</span></span><span style="display:flex;"><span>└── .claude/
</span></span><span style="display:flex;"><span>    ├── settings.local.json     <span style="color:#75715e"># Hooks configuration</span>
</span></span><span style="display:flex;"><span>    ├── commands/
</span></span><span style="display:flex;"><span>    │   ├── save.md             <span style="color:#75715e"># /save command</span>
</span></span><span style="display:flex;"><span>    │   └── refresh-prds.md     <span style="color:#75715e"># /refresh-prds command</span>
</span></span><span style="display:flex;"><span>    └── hooks/
</span></span><span style="display:flex;"><span>        └── session-start.sh    <span style="color:#75715e"># Loads context on startup</span>
</span></span></code></pre></div><h3 id="the-files">The Files</h3>
<p><strong><code>PROJECT_STATUS.md</code></strong> — Not a plan, a snapshot of reality that gets updated as reality changes. Current focus, decisions made (with rejected alternatives), blockers. I also keep a sequence of small, testable tasks: the kind of instructions I&rsquo;d give a junior engineer, or how I&rsquo;d approach the work if I were writing the code myself. Not &ldquo;build the whole feature,&rdquo; but &ldquo;create the route, add a controller that returns hello world, hit the endpoint to verify it works, then add the actual logic.&rdquo; One problem at a time. Small checkpoints where I can test and review each piece before moving on, rather than ending up with ten new files and feeling overwhelmed. Gets loaded into context on every session start via the SessionStart hook.</p>
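<p>A skeleton of the shape, with placeholder entries rather than my real ones:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-markdown" data-lang="markdown"># PROJECT STATUS

## Current Focus
Identity brokering between project-alpha and project-beta.

## Task Queue (small, testable steps)
- [x] Create the /broker/token route, returning a hard-coded token
- [ ] Hit the endpoint with curl to verify the wiring
- [ ] Replace the hard-coded token with the real signing logic

## Decision Log
| Date | Topic | Decision | Rejected | Rationale |
|------|-------|----------|----------|-----------|

## Blockers
- External developers have not confirmed the webhook payload shape yet
</code></pre></div>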
<p><strong><code>PRD_REQUIREMENTS.md</code></strong> — Current requirements from Confluence, cached locally. Also loaded on session start, so Claude always knows what we&rsquo;re building toward.</p>
<p><strong><code>docs/understanding/</code></strong> — How each codebase actually works. The authentication quirks, the unusual database choices, the non-obvious patterns you only discover by reading the code.</p>
<p><strong><code>CLAUDE.md</code></strong> — Instructions for Claude. I still have them, but they help, just not reliably. Things like &ldquo;update PROJECT_STATUS.md as you work&rdquo; and &ldquo;suggest /save when context is getting long.&rdquo; Nice to have, not critical path.</p>
<h3 id="the-hooks">The Hooks</h3>
<p><strong>SessionStart</strong> — When Claude Code starts (including after <code>/clear</code>), this hook loads <code>PROJECT_STATUS.md</code> and <code>PRD_REQUIREMENTS.md</code> into context automatically.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span>  <span style="color:#f92672">&#34;hooks&#34;</span>: {
</span></span><span style="display:flex;"><span>    <span style="color:#f92672">&#34;SessionStart&#34;</span>: [{
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">&#34;matcher&#34;</span>: <span style="color:#e6db74">&#34;startup|resume|clear|compact&#34;</span>,
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">&#34;hooks&#34;</span>: [{
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">&#34;type&#34;</span>: <span style="color:#e6db74">&#34;command&#34;</span>,
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">&#34;command&#34;</span>: <span style="color:#e6db74">&#34;\&#34;$CLAUDE_PROJECT_DIR/.claude/hooks/session-start.sh\&#34;&#34;</span>,
</span></span><span style="display:flex;"><span>        <span style="color:#f92672">&#34;timeout&#34;</span>: <span style="color:#ae81ff">10</span>
</span></span><span style="display:flex;"><span>      }]
</span></span><span style="display:flex;"><span>    }]
</span></span><span style="display:flex;"><span>  }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p>The script just outputs the files to stdout, which goes into Claude&rsquo;s context:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#75715e">#!/bin/bash
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>echo <span style="color:#e6db74">&#34;=== PROJECT STATUS ===&#34;</span>
</span></span><span style="display:flex;"><span>cat <span style="color:#e6db74">&#34;</span>$CLAUDE_PROJECT_DIR<span style="color:#e6db74">/PROJECT_STATUS.md&#34;</span>
</span></span><span style="display:flex;"><span>echo <span style="color:#e6db74">&#34;&#34;</span>
</span></span><span style="display:flex;"><span>echo <span style="color:#e6db74">&#34;=== REQUIREMENTS ===&#34;</span>
</span></span><span style="display:flex;"><span>cat <span style="color:#e6db74">&#34;</span>$CLAUDE_PROJECT_DIR<span style="color:#e6db74">/PRD_REQUIREMENTS.md&#34;</span>
</span></span></code></pre></div><p>Every session starts with context. No re-explaining.</p>
<h3 id="the-commands">The Commands</h3>
<p><strong><code>/save</code></strong> — The core of the whole setup. When I run <code>/save</code>, Claude updates <code>PROJECT_STATUS.md</code> with current progress, decisions, blockers, and next steps. If we discovered how something works internally, it updates the relevant <code>docs/understanding/*.md</code> file too. Then I run <code>/clear</code> for a fresh start.</p>
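<p>The command itself is nothing fancy, just a markdown file of instructions. A paraphrased sketch of what mine does (the exact wording differs):</p>
<pre><code>Update PROJECT_STATUS.md:
- Summarize what we worked on this session and its current state
- Record decisions made, including rejected alternatives
- Update blockers and the next few tasks

If we learned how part of a codebase works, update the matching
file under docs/understanding/.

Keep it concise: a snapshot of current state, not a history.
</code></pre>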
<p><strong><code>/refresh-prds</code></strong> — We use Confluence for PRDs. Querying it every session is slow, so this command fetches relevant PRD pages via the Atlassian MCP server and saves them to <code>PRD_REQUIREMENTS.md</code> locally.</p>
<h3 id="the-decision-log">The Decision Log</h3>
<p>One of the most valuable parts of <code>PROJECT_STATUS.md</code> is the Decision Log with rejected alternatives:</p>
<table>
  <thead>
      <tr>
          <th>Date</th>
          <th>Topic</th>
          <th>Decision</th>
          <th>Rejected</th>
          <th>Rationale</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>2026-01-08</td>
          <td>User lookup</td>
          <td>By UUID</td>
          <td>By email</td>
          <td>Email can change, UUID is immutable across systems</td>
      </tr>
      <tr>
          <td>2026-01-05</td>
          <td>API auth</td>
          <td>Signed JWT</td>
          <td>API key in header</td>
          <td>Need to pass user claims without extra lookup</td>
      </tr>
  </tbody>
</table>
<p>This prevents Claude from suggesting approaches we already tried and rejected. The &ldquo;Rejected&rdquo; column is critical. Without it, you&rsquo;re one compaction away from re-discovering the same dead ends.</p>
<h2 id="what-still-sucks">What Still Sucks</h2>
<p>I&rsquo;m not going to pretend this is perfect:</p>
<p><strong>It&rsquo;s manual.</strong> I sometimes forget to run <code>/save</code>. The habit helps, but it&rsquo;s still on me to remember.</p>
<p><strong>Auto-compact can still surprise you.</strong> If context fills up mid-task, it compacts before you can save. I&rsquo;ve learned to run <code>/save</code> proactively rather than waiting until I&rsquo;m about to leave.</p>
<p><strong>It uses context.</strong> Loading <code>PROJECT_STATUS.md</code> and <code>PRD_REQUIREMENTS.md</code> on every session takes up space. The tradeoff is worth it; the alternative is spending even more context and time re-explaining the same things. But it means keeping these files concise: a snapshot of current state, not a detailed history.</p>
<p><strong>SessionStart has bugs.</strong> Hook output occasionally fails to inject after a compact. It&rsquo;s rare, but it happens.</p>
<h2 id="the-results">The Results</h2>
<p>After a few weeks, this has worked unexpectedly well:</p>
<ul>
<li><strong>New sessions start with context.</strong> No more re-explaining the architecture or the end goal.</li>
<li><strong>Decisions and rationale are preserved.</strong> Including what we rejected and why.</li>
<li><strong>Failed approaches are documented.</strong> No re-trying things that didn&rsquo;t work.</li>
<li><strong>Implementation knowledge accumulates.</strong> The understanding docs grow over time.</li>
<li><strong>Coming back after time off works.</strong> The status file is a reliable snapshot.</li>
<li><strong>PRDs stay current.</strong> One command to sync from Confluence.</li>
</ul>
<h2 id="the-bigger-lesson">The Bigger Lesson</h2>
<p>With something really simple, I solved 80% of my problem.</p>
<p>A <code>SessionStart</code> hook that loads a couple of markdown files. Two commands that are just instructions for what to update. That&rsquo;s it. No clever automation, no complex tooling, just files and habits.</p>
<p>But this simple foundation gives me something to build on. I&rsquo;m already working on a tool to share with my team: something configurable where each developer can define their own files, their own instructions, their own context to keep up to date. What started as a personal workaround is turning into team tooling. And building it has taught me a ton about how AI tools actually work under the hood.</p>
<p>Interestingly, Claude Code itself started this way — as a simple internal tool that Anthropic engineers built for themselves. It spread through the company because it solved a real problem, and they debated whether to release it publicly or keep it as their &ldquo;secret sauce.&rdquo;</p>
<p>Most AI workflows assume you&rsquo;re building something new. Most of us aren&rsquo;t. We&rsquo;re maintaining, integrating, extending, debugging (often messy) codebases that evolved over years.</p>
<p>If the tools don&rsquo;t exist for your situation, build the simplest thing that works. You might be surprised where it leads.</p>
<hr>
<p><em>If you&rsquo;re dealing with similar problems (legacy codebases, context loss, documentation that can&rsquo;t keep up), I&rsquo;d love to hear how you&rsquo;re handling it.</em></p>
]]></content:encoded></item><item><title>From Friction to Flow: Rethinking Code Reviews</title><link>https://blog.talentlms.io/posts/pair-code-reviews/</link><pubDate>Mon, 12 Jan 2026 00:00:00 +0000</pubDate><dc:creator>Christos Xanthos</dc:creator><guid>https://blog.talentlms.io/posts/pair-code-reviews/</guid><description>Code reviews can feel slow and effort-heavy, yet a subtle shift transforms the whole experience.
By bringing people together in a pair setting - live, synchronous conversations - teams unlock sharper insight, quicker alignment, and smoother momentum.
A glimpse into why this approach quietly outpaces the usual flow.</description><enclosure url="https://blog.talentlms.io/images/posts/pair-code-reviews.png" type="image/png"/><media:content url="https://blog.talentlms.io/images/posts/pair-code-reviews.png" medium="image"><media:title type="plain">Abstract painted composition showing fragmented disconnected forms transitioning to smooth flowing connected brushstrokes on light background</media:title><media:description type="plain">Abstract painted composition showing fragmented disconnected forms transitioning to smooth flowing connected brushstrokes on light background</media:description></media:content><content:encoded><![CDATA[<img src="https://blog.talentlms.io/images/posts/pair-code-reviews.png" alt="Abstract painted composition showing fragmented disconnected forms transitioning to smooth flowing connected brushstrokes on light background" style="max-width: 100%; height: auto; margin-bottom: 1.5em;" /><p style="margin-bottom: 1.5em; padding: 1em; background-color: #f5f5f5; border-left: 4px solid #0066cc;"><strong>Christos Xanthos</strong>, Lead Software Engineer<br/>Christos has more than 15 years of experience developing internet services and over a decade shaping e-learning trends.

Watching young people grow and shine brings a smile to his face - whether …</p><blockquote>
<p><em>&ldquo;Are code reviews even worth it?&rdquo;</em><br>
<em>&ldquo;A fast, reliable CI suite would make them obsolete&rdquo;</em><br>
<em>&ldquo;Code reviews should be automated, not manual&rdquo;</em></p>
</blockquote>
<p>Relax - this is <strong>not</strong> that article.</p>
<p>We&rsquo;ve already made our choice: we do code reviews. Human code reviews. Full stop.</p>
<p>Instead, here&rsquo;s the real question we want to explore: <strong>If code reviews are already part of your workflow, how can you make them better?</strong></p>
<p>It turns out the answer has less to do with the code and more to do with the people writing it. We&rsquo;ll look at how a small shift in how we review code leads to faster decisions, fewer misunderstandings, and better collaboration.</p>
<h2 id="the-reality-of-our-code-legacy-quirks-and-a-little-mystery">The Reality of Our Code: Legacy, Quirks, and a Little Mystery</h2>
<p>Every codebase has a personality. History baked into it.<br>
Ours? Let&rsquo;s say we support diversity. Some parts are elegant. Others&hellip; well, they feel like someone left cryptic clues for future generations. Archaeological findings from a civilization that didn&rsquo;t believe in documentation.</p>
<p>If any of this sounds familiar, congratulations: you also work with legacy code.</p>
<p>When code has this much backstory, reviewing changes stops being a matter of scanning the lines. It becomes a search for context - hidden, implied, or half-forgotten.</p>
<p>And this is where collaboration, real collaboration, starts to shine.</p>
<h2 id="where-pair-programming-fits-in">Where Pair Programming Fits In</h2>
<p>For developers still building their seniority, pairing is one of the fastest ways to learn. It’s an immersive way to learn the codebase, the domain, the quirks, the conventions, and the “why does this function behave like that?” questions no onboarding handbook can truly capture.</p>
<p>And speaking of onboarding:</p>
<blockquote>
<p><strong>Onboarding Reality</strong><br>
One of our onboarding exercises involves opening a seemingly innocent controller file.<br>
By line 30, new developers usually ask, “Wait, why is <em>that</em> happening here? Isn’t this a controller?”<br>
And that’s when onboarding really begins.</p>
</blockquote>
<p>But we don&rsquo;t pair only for mentoring juniors or onboarding. We do use it to guide new hires through our legacy codebase and through risky code that catches people off guard, but we also reach for it on complex features and critical architectural decisions that are too risky to tackle alone.</p>
<h2 id="but-pair-programming-isnt-always-an-option">But Pair Programming Isn’t Always an Option</h2>
<p>Even when pairing makes sense technically, life gets in the way. Calendars clash. Deep focus is needed. The change is small. Or, honestly, we’re just not in the mood to narrate every mental step - no judgment.</p>
<p>Pair programming isn’t a default; it’s a tool.<br>
And tools work best when used intentionally.</p>
<p>So when pairing isn’t feasible but collaboration still <em>matters</em>, we rely on the next best thing.</p>
<h2 id="enter-pair-code-reviews-collaboration-without-the-calendar-gymnastics">Enter Pair Code Reviews: Collaboration Without the Calendar Gymnastics</h2>
<p>Pair code reviews take the essence of pairing (discussion, alignment, shared understanding) and apply it to the review stage.</p>
<p>The process is simple:</p>
<ol>
<li>Screen shared.</li>
<li>Questions asked and answered instantly.</li>
<li>Reviewer and author discuss decisions, alternatives, trade-offs, and risks.</li>
<li>They refine the code <em>together</em>.</li>
</ol>
<p>Here&rsquo;s the difference in velocity:</p>
<p><strong>Async review timeline:</strong></p>
<ul>
<li>Monday: PR created</li>
<li>Tuesday: Reviewer asks questions</li>
<li>Wednesday: More questions, more answers</li>
<li>Thursday: Changes pushed</li>
<li>Friday: Approval</li>
<li>Monday: Merge</li>
</ul>
<p><strong>Pair review timeline:</strong></p>
<ul>
<li>Thursday 14:00 → call</li>
<li>14:20 → updates made</li>
<li>14:25 → merged</li>
</ul>
<p><strong>The collaboration loop:</strong></p>
<pre><code>Author explains → Reviewer questions → Discuss → Improve → Approve → Merge
</code></pre>
<h2 id="why-pair-reviews-outperform-classic-async-reviews">Why Pair Reviews Outperform Classic Async Reviews</h2>
<p><strong>Clarity over comment threads.</strong><br>
A short conversation beats days of typed misunderstandings.</p>
<blockquote>
<p><strong>The Comment Novel</strong><br>
Most of us have seen a PR where the review thread was longer than the actual code.<br>
By the end, no one remembered what they were discussing.<br>
A 15-minute pair review would&rsquo;ve saved a week.</p>
</blockquote>
<p><strong>Shared context leads to better decisions.</strong><br>
Reviewers understand not just the code, but the <em>thinking</em> behind the code.</p>
<p><strong>Higher quality feedback.</strong><br>
Design issues, subtle risks, and legacy pitfalls surface more naturally through discussion.</p>
<p><strong>Fewer iterations.</strong><br>
Async reviews can feel like pen-pal correspondence. Pair reviews compress the whole thing into one iteration.</p>
<p><strong>Built-in mentorship.</strong><br>
Everyone leaves the room smarter than they entered.</p>
<p><strong>Less friction, more humanity.</strong><br>
Tone doesn&rsquo;t get misread. Nuance isn&rsquo;t lost.<br>
Communication feels&hellip; normal.</p>
<h2 id="the-trade-offs">The Trade-offs</h2>
<p>Pair code reviews aren&rsquo;t a silver bullet. They require synchronous time - both people need to be available at the same moment. That&rsquo;s not always possible, and it can become a bottleneck if every PR demands a live session.</p>
<p>They work best in teams with a safe culture where junior developers feel comfortable asking questions and senior developers don&rsquo;t dominate the conversation. Without that foundation, the risk of one person steering the entire discussion is real.</p>
<p>And honestly? For tiny, low-risk changes, the overhead isn&rsquo;t worth it. A one-line typo fix doesn&rsquo;t need a 15-minute call.</p>
<h2 id="pros-and-cons-of-pair-code-reviews">Pros and Cons of Pair Code Reviews</h2>
<table>
  <thead>
      <tr>
          <th>PROS</th>
          <th>CONS</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>Real-time clarity</td>
          <td>Needs synchronous time</td>
      </tr>
      <tr>
          <td>Better code quality</td>
          <td>Risk of dominance</td>
      </tr>
      <tr>
          <td>Faster decision-making</td>
          <td>Not ideal for tiny PRs</td>
      </tr>
      <tr>
          <td>Knowledge sharing</td>
          <td>Requires safe culture</td>
      </tr>
      <tr>
          <td>Mentorship</td>
          <td>Can become a bottleneck</td>
      </tr>
  </tbody>
</table>
<h2 id="choosing-between-pair-programming-pair-reviews-and-async-reviews">Choosing Between Pair Programming, Pair Reviews, and Async Reviews</h2>
<p>Our practical decision guide:</p>
<ul>
<li><strong>Use Pair Programming</strong><br>
For complex features, architectural decisions, or exploring legacy areas.</li>
<li><strong>Use Pair Code Reviews</strong><br>
When collaboration is needed but pairing wasn’t possible during implementation.</li>
<li><strong>Use Async Reviews</strong><br>
For small, low-risk changes that don’t require deep discussion.</li>
</ul>
<p>Each tool fits a different situation.<br>
The magic comes from choosing intentionally.</p>
<h2 id="conclusion-the-goal-isnt-perfection-its-better-collaboration">Conclusion: The Goal Isn’t Perfection. It’s Better Collaboration.</h2>
<blockquote>
<p><strong>The Aha! Moment</strong><br>
“I thought this code did X&hellip; then we paired, talked it through, and realized it actually did Y.”<br>
That collective <em>Aha!</em> is where the real value lives.</p>
</blockquote>
<p>Code reviews aren’t just checkpoints - they are opportunities to share knowledge and improve systems together.</p>
<p>Pair code reviews strike a healthy balance: structured enough to keep things moving, collaborative enough to avoid misunderstandings, and flexible enough to fit into real workflows.</p>
<p>Perfect code doesn’t exist.<br>
But <strong>better collaboration</strong> does - and pair reviews help us get there.</p>
]]></content:encoded></item><item><title>Spec-Driven Development</title><link>https://blog.talentlms.io/posts/spec-driven-development/</link><pubDate>Mon, 05 Jan 2026 00:00:00 +0000</pubDate><dc:creator>Evangelos Kalosynakis</dc:creator><guid>https://blog.talentlms.io/posts/spec-driven-development/</guid><description>In a world where AI writes code at lightning speed, the bottleneck has shifted from typing to thinking.
Spec-Driven Development (SDD) puts requirements and behavior specifications at the center of the development process, ensuring that what we build is what we actually need — before a single line of code is written.</description><enclosure url="https://blog.talentlms.io/images/posts/spec-driven-development.png" type="image/png"/><media:content url="https://blog.talentlms.io/images/posts/spec-driven-development.png" medium="image"><media:title type="plain">Spec-Driven Development workflow showing four stages: Specify, Plan, Tasks, and Implement with icons for each step</media:title><media:description type="plain">Spec-Driven Development workflow showing four stages: Specify, Plan, Tasks, and Implement with icons for each step</media:description></media:content><content:encoded><![CDATA[<img src="https://blog.talentlms.io/images/posts/spec-driven-development.png" alt="Spec-Driven Development workflow showing four stages: Specify, Plan, Tasks, and Implement with icons for each step" style="max-width: 100%; height: auto; margin-bottom: 1.5em;" /><p style="margin-bottom: 1.5em; padding: 1em; background-color: #f5f5f5; border-left: 4px solid #0066cc;"><strong>Evangelos Kalosynakis</strong>, Software Engineer<br/>Vaggelis is a back-end software engineer, coming from a full stack background with experience in multiple technologies. Currently working in TalentLMS and is passionate about trying new technologies …</p><p>There are plenty of resources explaining what Spec-Driven Development (SDD) is and why it was created, but rarely do they cover how to actually use it and what problems it solves in practice.</p>
<p>For reference, here&rsquo;s a solid definition from <a href="https://github.blog/ai-and-ml/generative-ai/spec-driven-development-with-ai-get-started-with-a-new-open-source-toolkit/" target="_blank" rel="noopener noreferrer">GitHub&rsquo;s blog</a>:</p>
<blockquote>
<p>Instead of coding first and writing docs later, in spec-driven development, you start with a (you guessed it) spec. This is a contract for how your code should behave and becomes the source of truth your tools and AI agents use to generate, test, and validate code. The result is less guesswork, fewer surprises, and higher-quality code.</p>
</blockquote>
<p>But what does this mean for your daily AI usage? What can SDD do that you can&rsquo;t achieve with AI on your own?</p>
<p>The answer: it can do what you already do, but in a far more organized and predictable way.</p>
<h2 id="getting-started-with-openspec">Getting Started with OpenSpec</h2>
<p>The key lies in the specification and supporting documentation. We chose <a href="https://github.com/Fission-AI/OpenSpec" target="_blank" rel="noopener noreferrer">OpenSpec</a> because it&rsquo;s as simple as it gets — it helps you get acquainted with the process without mental overload.</p>
<p>After installing the tool, run <code>openspec init</code> to create the necessary files. The only thing you need to provide is context for your project in <code>/openspec/project.md</code>, which you can do with the AI agent itself. After initialization, OpenSpec even hands you the prompt for this: <code>&quot;Please read openspec/project.md and help me fill it out with details about my project, tech stack, and conventions&quot;</code>.</p>
<p>This file serves as the main entry point for your AI agent, giving it full context without declaring it on every prompt. It also acts as guardrails, keeping the agent within your project&rsquo;s scope.</p>
<h2 id="the-four-prompt-workflow">The Four-Prompt Workflow</h2>
<p>Once set up, SDD boils down to four prompts:</p>
<ol>
<li><strong>&ldquo;Please create the change proposal for [FEATURE]&rdquo;</strong></li>
<li><strong>&ldquo;Let&rsquo;s update the change proposal with [CHANGE]&rdquo;</strong></li>
<li><strong>&ldquo;I&rsquo;m happy with the change proposal. Let&rsquo;s proceed with the implementation&rdquo;</strong></li>
<li><strong>&ldquo;I&rsquo;m happy with the implementation, let&rsquo;s archive the proposal&rdquo;</strong></li>
</ol>
<p>That&rsquo;s the essence of working with SDD. Since all documentation is provided upfront, you don&rsquo;t need to repeat references. Just describe your feature in as much detail as possible, review the generated proposal, refine it if needed, then approve and implement. If the result isn&rsquo;t satisfactory, you can always point the agent back to the proposal for guidance. When you&rsquo;re happy, archive it and move on.</p>
<h2 id="why-this-beats-blind-prompting">Why This Beats Blind Prompting</h2>
<ol>
<li><strong>No fighting the AI</strong> — Guidelines are baked in, so you don&rsquo;t need to repeat them on every prompt</li>
<li><strong>No guessing</strong> — You know exactly what the AI will implement before reviewing code</li>
<li><strong>Catch issues early</strong> — Reading the proposal surfaces problems before they become code</li>
<li><strong>Iterate with confidence</strong> — Strictly defined guidelines make refinement predictable</li>
<li><strong>Full control over implementation order</strong> — Want TDD? Tests first? Documentation updates? You decide</li>
<li><strong>No repeated context</strong> — Everything is centrally located</li>
<li><strong>Always current</strong> — Keeping documentation updated means the AI always has the latest guidelines</li>
</ol>
<h2 id="does-it-work">Does It Work?</h2>
<p>Yes, it does. Case in point: the very blog you&rsquo;re reading. It runs on an entirely different stack from the one we use daily, yet we added this article and a few features in less than 20 minutes of starting work on the project, without any prior experience in that stack.</p>
<p>Great documentation already existed, so the AI agent had an easy time filling our <code>project.md</code> file with all the relevant info, which in turn made this a breeze.</p>
<h2 id="how-can-you-make-it-work">How Can You Make It Work?</h2>
<p>One great initiative was sharing SDD in a workshop, with the intention of rolling it out to more projects. To keep it interactive and test the method&rsquo;s effectiveness, we included colleagues unfamiliar with our projects and asked them to implement a feature in a codebase they didn&rsquo;t fully know.</p>
<p>This has many benefits for everyone:</p>
<ul>
<li>Developers, even when inexperienced, can feel valuable within the company — even when they aren&rsquo;t confident in their knowledge of the project or product</li>
<li>They get a high-level overview of what&rsquo;s needed when implementing something, which reduces cognitive overload since they don&rsquo;t have to get lost in a codebase</li>
<li>They can be more productive even with limited knowledge</li>
<li>They can follow guidelines without the need for external supervision or course correction</li>
<li>Projects can be maintained by more people instead of relying on a select few with experience</li>
</ul>
]]></content:encoded></item><item><title>From “We'll Upgrade Later” to TanStack Query v5: A Human Story About Cleaning Up the Client</title><link>https://blog.talentlms.io/posts/from-update-later-to-tanstack-v5/</link><pubDate>Mon, 15 Dec 2025 00:00:00 +0000</pubDate><dc:creator>Isidoros Lemonidis</dc:creator><guid>https://blog.talentlms.io/posts/from-update-later-to-tanstack-v5/</guid><description>Ever opened a repo, spotted an ancient dependency version, and felt your motivation quietly exit the building? That was me, staring at our client still running on TanStack Query v3.
Here’s the (mostly human) story of how I dragged it to v5 and what it taught me about keeping a codebase clean.</description><enclosure url="https://blog.talentlms.io/images/posts/from-update-later-to-tanstack-v5.jpg" type="image/png"/><media:content url="https://blog.talentlms.io/images/posts/from-update-later-to-tanstack-v5.jpg" medium="image"><media:title type="plain">Close-up of a weathered wall where colorful blue-and-yellow patterned ceramic tiles are cracked and peeling away, revealing rough plaster and red brick underneath.</media:title><media:description type="plain">Close-up of a weathered wall where colorful blue-and-yellow patterned ceramic tiles are cracked and peeling away, revealing rough plaster and red brick underneath.</media:description></media:content><content:encoded><![CDATA[<img src="https://blog.talentlms.io/images/posts/from-update-later-to-tanstack-v5.jpg" alt="Close-up of a weathered wall where colorful blue-and-yellow patterned ceramic tiles are cracked and peeling away, revealing rough plaster and red brick underneath." style="max-width: 100%; height: auto; margin-bottom: 1.5em;" /><p style="margin-bottom: 1.5em; padding: 1em; background-color: #f5f5f5; border-left: 4px solid #0066cc;"><strong>Isidoros Lemonidis</strong>, Frontend Software Engineer<br/>Isidoros is an ever-loving enthusiast of technology, gaming and PC related stuff in general from a young age.

Having joined the company in May, 2023, his main focus is on the client-side of the …</p><p>There’s a special kind of technical debt that doesn’t look like debt at all. The app works. The screens load. Nobody’s yelling. So we tell ourselves the classic lie:</p>
<blockquote>
<p><strong>“It’s fine. We’ll upgrade later.”</strong></p>
</blockquote>
<p>But “later” is where upgrades go to become migrations.</p>
<p>Staying behind on library versions is rarely painful day-to-day. The pain shows up slowly and quietly: docs stop matching what you have, examples on the internet don’t apply, new teammates assume a newer API, and tiny workarounds pile up because “that’s how this old version needs it.” And then one day you’re not upgrading a library, you’re upgrading a whole era of decisions.</p>
<p>That’s where we were: our client repo was still on TanStack Query v3, while the world had moved on to v5.</p>
<h3 id="the-spark-amsterdam-june-2025">The Spark: Amsterdam, June 2025</h3>
<p>This upgrade didn’t start as a roadmap initiative or a “mandatory improvement.” It started in June 2025, at React Summit Amsterdam.</p>
<p>I watched <strong><a href="https://github.com/tannerlinsley" target="_blank" rel="noopener noreferrer">Tanner Linsley</a></strong> (the creator of TanStack Query) talk about where the library is going, and why. I also joined a hands-on workshop led by <strong><a href="https://github.com/tkdodo" target="_blank" rel="noopener noreferrer">TkDodo (Dominik Dorfmeister)</a></strong>, one of the main contributors.</p>
<p>And that combination did something dangerous to my brain: it made the future feel close. Suddenly, staying behind didn’t feel “safe” anymore; it felt like we were choosing to be stuck.</p>
<p>I came back with the kind of motivation that only conferences provide: equal parts inspiration and <em>“I can totally do this in a week.”</em></p>
<p><em><strong>Spoiler Alert: It didn&rsquo;t take a week</strong></em></p>
<h3 id="why-we-migrated-without-the-marketing-fluff">Why We Migrated (Without the Marketing Fluff)</h3>
<p>Even if you’ve never heard of TanStack Query (or even React), this part is relatable: some upgrades are worth doing because they make the system safer, clearer, and easier to change.</p>
<p>For us, v5 had a few big reasons to move:</p>
<ul>
<li>Our client is TypeScript-first, and v5 improves the overall type-safety story (fewer “trust me” moments).</li>
<li>It aligns better with modern React patterns (including improved suspense/error support in the ecosystem around it).</li>
<li>It encourages a more intuitive flow for keeping data in sync, especially around invalidations and organizing query keys.</li>
<li>It comes with new structure, improved consistency, and generally a more “current” mental model.</li>
</ul>
<p>And yes: no one wants to be left behind, especially on a library that sits at the core of how data flows through the app.</p>
<p><strong>That’s the practical version. The emotional version is simpler:</strong></p>
<blockquote>
<p>If this thing is part of our app’s foundation, it shouldn’t be fossilized.</p>
</blockquote>
<h3 id="what-actually-changes-in-an-upgrade-like-this-the-vibe-not-the-details">What Actually Changes in an Upgrade Like This (The Vibe, Not the Details)</h3>
<p>Here’s the trick with explaining upgrades: people don’t care about the exact API changes, and they shouldn’t. What matters is the shape of the pain.</p>
<p>The v3 → v5 jump had a theme: more consistency, fewer “choose your own adventure” patterns.</p>
<p>In plain language, we moved from <em>“there are multiple ways to call this and they all kinda work”</em> to <em>“there’s one clear way to do it.”</em> In practice, that meant a lot of the code had to be rewritten into a more unified structure.</p>
<p>Some of the internal vocabulary changed too (for example, what used to be described as “loading” moved toward “pending”). Not exciting!? Sure, until you realize how many places in a mature product use those states for spinners, skeleton loaders, button disabling, and empty states.</p>
<p>So even small changes ripple into real UI behavior.</p>
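<p>A tiny sketch of what that rename looks like in practice. Everything here is a placeholder (in v3 the library even shipped under a different package name, <code>react-query</code>):</p>
<pre><code>import { useQuery } from '@tanstack/react-query';

type Course = { id: string; title: string };

// Placeholder endpoint and types, invented for the example.
const fetchCourses = async (): Promise&lt;Course[]&gt; =&gt; {
  const response = await fetch('/api/courses');
  return response.json();
};

export function useCourses() {
  // v3 style: useQuery(['courses'], fetchCourses), and callers checked isLoading.
  // v5 style: a single options object, and the same state is now isPending.
  return useQuery({ queryKey: ['courses'], queryFn: fetchCourses });
}
</code></pre>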
<h3 id="the-elephant-in-the-room-wait-where-did-my-callbacks-go">The Elephant in the Room: “Wait… Where Did My Callbacks Go?”</h3>
<p>This was the moment I knew the migration was going to turn into a story.</p>
<p>If you’ve ever used a “data fetching” tool, you’ve probably used callbacks like:</p>
<ul>
<li>“when it succeeds, do this”</li>
<li>“when it fails, do that”</li>
<li>“when it finishes, clean up”</li>
</ul>
<p>In v5, query callbacks were removed outright (they survive on mutations, but not on queries). TanStack&rsquo;s reasoning (paraphrased) is basically:</p>
<blockquote>
<p>They can create confusing side-effects; prefer using <code>useEffect</code> or dependent queries instead.</p>
</blockquote>
<p>Now, that’s a fair philosophy. But in a large existing codebase, it also means:</p>
<ul>
<li>You can’t just upgrade the package.</li>
<li>You have to decide what your new “standard way” is.</li>
<li>And you have to apply it everywhere fairly consistently.</li>
</ul>
<p>So we did what we’ve learned (and, frankly, what big codebases often need and <strong>should</strong> do!):</p>
<p><strong>We created facades</strong></p>
<p>Instead of rewriting the whole app to adopt the new callback philosophy directly in every file, we introduced our own wrappers (facades) for <code>useQuery</code> and <code>useInfiniteQuery</code>.</p>
<p>Under the hood, they follow the recommended approach (via useEffect and status/data checks), but from the developer’s point of view, they act as the familiar “house style” of our repo:</p>
<ul>
<li>Import our own hooks, not the library&rsquo;s directly.</li>
<li>Keep behavior consistent.</li>
<li>Make future changes easier because we control the interface.</li>
</ul>
<p>It’s the same spirit as other abstractions we already have (like how we’ve wrapped other libs to add our own conventions).</p>
<p>This ended up being one of the most important parts of the migration, not because it’s “clever,” but because it reduces confusion and keeps the codebase cohesive.</p>
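<p>To make that concrete, here&rsquo;s a minimal sketch of the facade idea. The hook name and the simplified error type are placeholders, not our actual code, and our real wrappers cover more cases:</p>
<pre><code>import { useEffect } from 'react';
import { useQuery, type UseQueryOptions } from '@tanstack/react-query';

// Hypothetical facade: the v3-style callbacks live on, but implemented
// the way v5 recommends.
type Callbacks&lt;TData&gt; = {
  onSuccess?: (data: TData) =&gt; void;
  onError?: (error: Error) =&gt; void;
};

export function useAppQuery&lt;TData&gt;(
  options: UseQueryOptions&lt;TData, Error&gt; &amp; Callbacks&lt;TData&gt;,
) {
  const { onSuccess, onError, ...queryOptions } = options;
  const result = useQuery(queryOptions);

  // v5 removed query callbacks, so we recreate the old behavior with
  // effects keyed on status. The callbacks are deliberately left out of
  // the dependency arrays so a success fires once per data change, not
  // on every render.
  useEffect(() =&gt; {
    if (result.isSuccess &amp;&amp; onSuccess) onSuccess(result.data);
  }, [result.isSuccess, result.data]);

  useEffect(() =&gt; {
    if (result.isError &amp;&amp; onError) onError(result.error);
  }, [result.isError, result.error]);

  return result;
}
</code></pre>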
<h3 id="how-the-migration-took-life-aka-the-montage">How the Migration Took Life (aka: The Montage)</h3>
<p>I won’t pretend this was a calm, linear process. It was more like a series of increasingly honest conversations with reality.</p>
<p><strong>Step 1: Cursor was a big part of it</strong></p>
<p>I fed the agent documentation, asked it to handle the “mechanical” stuff (renames, repetitive edits, the new unified syntax), and it did help, especially early on.</p>
<p>And then we hit the classic wall: large codebase + many patterns + many edge cases = AI starts hallucinating and confidently breaks things.</p>
<p>So the vibe shifted from <em>“AI will do this for me”</em> to <em>“AI will do the tedious parts while I supervise like a tired detective.”</em></p>
<p><strong>Step 2: Linting as a compass</strong></p>
<p>At some point it became pure grind in the best sense: <code>npm run lint</code>, click errors one by one, guide the tool, fix things manually when needed, repeat.</p>
<p>Not glamorous, but effective. Like cleaning a kitchen by starting from the most visible mess.</p>
<p>After these first steps, squeezed in between my &ldquo;planned tasks&rdquo; and partly done in my free time, two weeks had flown by.</p>
<p>Then reality hit me: at this pace, it was going to take <strong>a year</strong>. So I did the only thing that could get this finished: I escalated the issue and bought myself some time off my &ldquo;regular work&rdquo; to see it through.</p>
<p><strong>Step 3: The facade fix</strong></p>
<p>Once the obvious deprecations were handled, we solved the “callbacks” problem properly by introducing the wrappers. That was a turning point: suddenly the rest of the migration felt possible.</p>
<p>At this point, I was nearly a month in but I could actually see the light at the end of the tunnel.</p>
<p><strong>Step 4: Running the client</strong></p>
<p>There’s a moment in every migration where you finally get to:</p>
<p>build,</p>
<p>load the app,</p>
<p>click around…</p>
<p>…and you think, “We did it.”</p>
<p><em><strong>That moment is a liar.</strong></em></p>
<p><strong>Step 5: Bring in QA (and humility)</strong></p>
<p>This is where the real work started. Compilers can’t catch logic misunderstandings. QA can!</p>
<p>I asked QA to run the full product suite and help expose anything that “felt off” after the changes.</p>
<p><strong>Step 6: Fix → Test → Repeat</strong></p>
<p>You fix what QA finds, QA tests again, you fix again… until the upgrade stops being “technically correct” and starts being “actually correct.”</p>
<p>After almost a week of doing this, I actually got the <em><strong>green light</strong></em> from everyone involved.</p>
<p><em><strong>One and a half months</strong></em> after it all started, with my heart rate finally dropping back to its normal BPM, came the cherry on top: a big presentation to the whole frontend chapter, shared with the entire company, about what changed and all the new, cool things we can now build.</p>
<p>This was my first time diving so deep into a professional repository and I will never forget the experience.</p>
<h3 id="the-takeaway-cleanliness-is-a-decision-we-make-repeatedly">The Takeaway: Cleanliness Is a Decision We Make Repeatedly</h3>
<p>Doing this alone was a struggle, but it taught me a ton about the library, about our codebase, and about migration strategy. And it also made something obvious:</p>
<p><em><strong>If we keep upgrades small and regular, they stay boring.</strong></em><br>
<em><strong>If we postpone them long enough, they become heroic.</strong></em></p>
<p>Keeping the client modern is an ongoing struggle—but it’s also one of the most important forms of care we can give the product. Not because “new is shiny,” but because clean foundations make everything else easier.</p>
<p>So here’s my small, slightly dramatic plea:</p>
<blockquote>
<p>Let’s not let garbage become architecture.<br>
Let’s keep things up to date, consistently, incrementally, and without fear.</p>
</blockquote>
]]></content:encoded></item><item><title>Boundaries Against the Machine</title><link>https://blog.talentlms.io/posts/boundaries-against-the-machine/</link><pubDate>Mon, 08 Dec 2025 00:00:00 +0000</pubDate><dc:creator>Yannis Rizos</dc:creator><guid>https://blog.talentlms.io/posts/boundaries-against-the-machine/</guid><description>Five years ago, we invested in Domain-Driven Design. Conferences, workshops, consultants. The works.
The goal was simple: help humans navigate complexity. Make domain experts and developers speak the same language.
We had no idea those same boundaries would matter for something else entirely.</description><enclosure url="https://blog.talentlms.io/images/posts/boundaries-against-the-machine.png" type="image/png"/><media:content url="https://blog.talentlms.io/images/posts/boundaries-against-the-machine.png" medium="image"><media:title type="plain">Two abstract painted figures seated at a desk studying multiple maps with boundaries and routes on a warm background</media:title><media:description type="plain">Two abstract painted figures seated at a desk studying multiple maps with boundaries and routes on a warm background</media:description></media:content><content:encoded><![CDATA[<img src="https://blog.talentlms.io/images/posts/boundaries-against-the-machine.png" alt="Two abstract painted figures seated at a desk studying multiple maps with boundaries and routes on a warm background" style="max-width: 100%; height: auto; margin-bottom: 1.5em;" /><p style="margin-bottom: 1.5em; padding: 1em; background-color: #f5f5f5; border-left: 4px solid #0066cc;"><strong>Yannis Rizos</strong>, Chief Software Architect<br/>Yannis discovered programming at age 7. Soon after, he encountered Larry Wall's three virtues of laziness, impatience, and hubris, principles that have guided his approach to software development ever …</p><p><strong>DDD Europe 2020, Amsterdam.</strong> For a COVID-era hire like me, this was the first time meeting several colleagues in person whom I had been working with daily for months. I kept noticing how familiar everyone sounded and how unfamiliar they looked. An odd sense of delayed recognition. We had paired on code over Google Meet, argued about bounded contexts in Slack, but never shared the same room. The talks, the workshops, the hallway conversations with practitioners who had been doing this for years. We absorbed everything we could.</p>
<p>This was not a one-off. Over the next five-plus years, Epignosis doubled down on more conferences, internal training programs, external consultants, workshops, books, the works. The investment was significant. The business case was clear. Align our code with our business. Make the codebase maintainable as we scale. Enable domain experts and developers to speak the same language.</p>
<p>What we did not anticipate five years ago, in that Amsterdam conference hall, was that we were also preparing our codebase for a future where AI would help us write it.</p>
<h2 id="what-we-built">What We Built</h2>
<p>The first fruits came when we built the new TalentLMS API. This is where Domain-Driven Design stopped being conference theory and became operational practice. We established Ubiquitous Language with domain experts. I remember the shift when conversations with product became faster because we had finally stopped translating ideas back and forth. No more translation layer between what product teams said and what engineers built.</p>
<p>We introduced Value Objects to replace primitives. A <code>CourseStatus</code> is a <code>CourseStatus</code> with its own validation and behavior. Some engineers hesitated at first because what we were now calling <a href="https://refactoring.guru/smells/primitive-obsession" target="_blank" rel="noopener noreferrer"><em>primitive obsession</em></a> had been the norm for years. That hesitation faded once the constraints started catching real issues. No more passing around strings and hoping they were the right kind of string.</p>
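<p>Our implementation is PHP, but the idea fits in a few lines. An illustrative sketch in TypeScript, with the status values invented for the example:</p>
<pre><code>// Not our actual code: statuses and method names are made up.
class CourseStatus {
  private static readonly VALID = ['active', 'inactive', 'archived'];

  private constructor(private readonly value: string) {}

  static fromString(value: string): CourseStatus {
    if (!CourseStatus.VALID.includes(value)) {
      throw new Error(`Invalid course status: ${value}`);
    }
    return new CourseStatus(value);
  }

  isActive(): boolean {
    return this.value === 'active';
  }

  toString(): string {
    return this.value;
  }
}

// A signature like this tells the reader, human or machine, that
// validation has already happened upstream:
function archiveCourse(status: CourseStatus): void {
  if (!status.isActive()) return; // nothing to do
  // ...
}
</code></pre>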
<p>The Anti-Corruption Layer became the cornerstone of our refactoring strategy. We were building new code alongside what was, at the time, a 12-year-old system. We could not afford to let old assumptions bleed into the new model. I could feel the team relax once they realized the new model would stay protected from old assumptions. The ACL created a boundary, a translation layer that let us move forward without being dragged backward by legacy constraints.</p>
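<p>In code, the ACL is less mysterious than it sounds. A deliberately tiny sketch, with every name hypothetical:</p>
<pre><code>// What a 12-year-old schema hands us:
interface LegacyCourseRow {
  id: number;
  name: string;
  active: 0 | 1; // legacy boolean-as-int
}

// What the new model expects:
interface Course {
  id: number;
  title: string;
  status: 'active' | 'inactive';
}

// The anti-corruption layer makes the translation explicit, so legacy
// assumptions stop at this boundary instead of leaking into new code:
function courseFromLegacy(row: LegacyCourseRow): Course {
  return {
    id: row.id,
    title: row.name,
    status: row.active === 1 ? 'active' : 'inactive',
  };
}
</code></pre>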
<p>Legacy systems demand clarity at every layer. When domain experts and engineers speak different languages, features get built wrong. When boundaries blur, technical debt compounds. At our scale, DDD was not optional. It was operational necessity.</p>
<p>After the API project, we restructured the codebase around domain concepts. We moved from a technical organization to a domain one. Now you do not just know where the models, the views, and the controllers are. You know where the <em>Learning Paths</em> are. Where <em>Talent Library</em> lives. Where to look for <em>Reports</em>.</p>
<h2 id="how-ai-reads-it">How AI Reads It</h2>
<p>Today, when I ask AI to add a feature to <em>Notifications</em>, it enters the module as if it already understands the territory. It reads the surrounding files, forms a picture of the local concepts, and uses those patterns to shape its first draft. The result is not perfect, but the model moves through the module with a level of confidence that only appeared once the structure became consistent.</p>
<p>AI tools work with limited context. Your current file plus nearby files. This turns domain-based organization from nice to have into critical. When AI is working in the Notifications module, the files it can see are notification concepts, not random controllers. Proximity becomes a semantic relationship instead of accidental collocation.</p>
<p>AI consumes our DDD structure through multiple channels. File and directory names reflect domain concepts. When AI scans the Notifications module, it sees how we handle trigger conditions, execution schedules, and result tracking. Architectural Decision Records document why certain boundaries exist and what alternatives we considered.</p>
<p>Value Object type signatures make constraints explicit in method signatures. When AI sees a function that takes <code>NotificationsTitle</code> instead of a string, it recognizes a constraint. The Ubiquitous Language we established means the model encounters terms that carry domain meaning, not generic technical jargon.</p>
<p>The Anti-Corruption Layer shows AI where boundaries are. It will not couple new feature code directly to legacy database schemas. The ACL is a boundary, a wall you cannot walk through without noticing.</p>
<p>The first time AI added code that matched the existing module structure, it felt like the model had finally learned the shape of our system. That was the moment when the link between our boundaries and the model&rsquo;s output stopped being theoretical and became visible in daily work.</p>
<h2 id="was-it-worth-it">Was It Worth It?</h2>
<p>DDD is not free. Introducing Value Objects means wrapping primitives. Defining aggregates means thinking hard about consistency boundaries. Building an Anti-Corruption Layer means accepting the overhead of translation between old and new.</p>
<p>In a greenfield project, you can build with DDD from day one. In a legacy codebase, you are retrofitting. You are making incremental changes while keeping the system running. Every change needs careful migration. Every boundary you introduce might break something. Restructuring a codebase around domains when you have more than a decade of technical organization is months of work.</p>
<p>Maximalists argue we should wait for better models. Models are improving fast. By the time you have spent six months on DDD, maybe AI will not need that structure anymore. Maybe it will figure things out. These are not unreasonable positions.</p>
<p>But TalentLMS is not a prototype; it is not something you build at a <a href="https://www.starttech.vc/blog/2025/from-ancient-theater-to-modern-hackathlon/" target="_blank" rel="noopener noreferrer">3-day hackathon</a>. It serves more than 20 million users. That means edge cases accumulated over more than a decade that are not in any training data. Business logic that reflects real-world complexity, not textbook examples. Performance optimizations that look odd but exist because a specific query pattern was crushing the database back in 2014. Regulatory requirements across different countries. Integrations with dozens of third-party tools, each with its own quirks. Data migrations that took months to plan and execute.</p>
<p>AI cannot hold this in its context window. Even the largest models. You cannot fit years of accumulated decisions, trade-offs, and reasons for odd behavior into a prompt. Seeing AI miss a detail that every senior engineer at TalentLMS knew by heart reminded me how much of our system lives outside documentation. Kent Beck frames it clearly in <a href="https://tidyfirst.substack.com/p/programming-deflation" target="_blank" rel="noopener noreferrer">Programming Deflation</a>:</p>
<blockquote>
<p>In a world of abundant cheap code, what becomes scarce? Understanding. Judgment. The ability to see how pieces fit together. The wisdom to know what not to build.</p>
</blockquote>
<h2 id="what-we-are-seeing">What We Are Seeing</h2>
<p>Features that would have taken days now take hours. AI-generated code fits our architecture more consistently. I cannot tell whether the improvement comes from our structure, better prompts, or rapid model progress. I only know the change is visible in daily work.</p>
<p>Structure that helps humans navigate complexity seems to help machines navigate it too. Whether that is causal or correlation, we will know in the near future. For now, we are paying close attention.</p>
<p>But the shift is real. When AI writes the boilerplate, what remains are the decisions that matter. Where boundaries go. What invariants hold the system together. How capabilities compose. Which trade-offs we are willing to accept and why.</p>
<p>In the good old days, we spent mental energy on low-level questions. How to implement a validation. How to write a specific query. With AI, that energy can be reserved for higher-level reasoning. What invariants an aggregate must protect. Where a capability belongs in the architecture. Same mental load, different altitude. More time on structure. Less on mechanics.</p>
<p>At 100 million requests per hour, architectural decisions compound. A poor boundary creates operational problems. A missing invariant risks data integrity. AI can help you move faster, but only if your architecture can guide it. Without that, AI only helps you make mistakes faster.</p>
<h2 id="five-years-later">Five Years Later</h2>
<p>Standing in that Amsterdam conference hall, we were learning DDD to build better software. The investment was about aligning code with business language, about creating boundaries that made sense, about sustainable complexity management.</p>
<p>Today, that same investment pays dividends we never anticipated. Our domain modules. Our explicit boundaries. Our Ubiquitous Language. The Anti-Corruption Layer that keeps old assumptions from bleeding into new code. None of it was built with AI in mind, yet all of it matters for AI effectiveness.</p>
<p>I look back at that conference trip now with a sense of quiet irony because none of us imagined what those early choices would enable. Those workshops and late-night debates about aggregate boundaries were not only about maintainability. They were shaping the maps that modern development tools now rely on.</p>
<p>Not a bad return on investment for a conference trip.</p>
]]></content:encoded></item><item><title>Engineering Hiring: Our Journey Through Constant Change</title><link>https://blog.talentlms.io/posts/engineering-hiring-journey-through-constant-change/</link><pubDate>Mon, 01 Dec 2025 00:00:00 +0000</pubDate><dc:creator>Konstantinos Chatzinikolakis</dc:creator><guid>https://blog.talentlms.io/posts/engineering-hiring-journey-through-constant-change/</guid><description>Three-plus years of continuous hiring evolution at TalentLMS, from traditional assignments to live coding experiments, back to take-homes, and through AI disruption. A candid look at our journey of constant experimentation and what we learned about finding the right engineering talent through changing times.</description><enclosure url="https://blog.talentlms.io/images/posts/engineering-hiring-journey-through-constant-change.png" type="image/png"/><media:content url="https://blog.talentlms.io/images/posts/engineering-hiring-journey-through-constant-change.png" medium="image"><media:title type="plain">Abstract painted forms showing evolution and transformation, representing the continuous journey of hiring process experimentation</media:title><media:description type="plain">Abstract painted forms showing evolution and transformation, representing the continuous journey of hiring process experimentation</media:description></media:content><content:encoded><![CDATA[<img src="https://blog.talentlms.io/images/posts/engineering-hiring-journey-through-constant-change.png" alt="Abstract painted forms showing evolution and transformation, representing the continuous journey of hiring process experimentation" style="max-width: 100%; height: auto; margin-bottom: 1.5em;" /><p style="margin-bottom: 1.5em; padding: 1em; background-color: #f5f5f5; border-left: 4px solid #0066cc;"><strong>Konstantinos Chatzinikolakis</strong>, TalentLMS Enterprise Engineering Director<br/>Kostas joined Epignosis back in 2019 and somehow ended up managing around 30 engineers across the company's e-learning platforms. He's fascinated by the human side of building software and believes …</p><h1 id="the-evolution-never-stops-three-plus-years-of-hiring-adventures-in-engineering">The Evolution Never Stops: Three-Plus Years of Hiring Adventures in Engineering</h1>
<blockquote>
  <p>It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change.</p><cite class="author">Charles Darwin</cite></blockquote>


<p>Darwin probably wasn&rsquo;t thinking about tech hiring when he wrote this, though I first encountered this quote in Kent Beck&rsquo;s &ldquo;Extreme Programming Explained.&rdquo; But here we are. Over the past three-plus years, our engineering hiring process has been on a wild ride of transformations. Not because some consultant told us to, not because we read it in a blog post, but because&hellip; well, things kept changing and we kept trying stuff. This is that story.</p>
<h2 id="where-we-started-the-traditional-days">Where We Started: The Traditional Days</h2>
<p>Three-plus years ago, when I took over hiring responsibilities, I inherited what seemed like a solid, time-tested process. Private portal, comprehensive assignments, PHP specs, EER diagrams: the works. This had been the way things were done for years (I&rsquo;d gone through it myself when I was hired back in 2019). Candidates would disappear for 3-5 days and emerge with their solutions.</p>
<p>It mostly worked. We could spot who understood patterns, who thought about security, and who could document their thinking. But my favorite discovery? README files became this unexpected window into people&rsquo;s minds. The way someone explains how to set up their project, how they anticipate confusion, how they communicate with a future stranger - pure gold.</p>
<h2 id="the-time-we-tried-to-be-nice-live-coding-adventures">The Time We Tried to Be Nice: Live Coding Adventures</h2>
<p>At some point, we had this collective moment of &ldquo;Wait, are we asking too much?&rdquo; Five days of someone&rsquo;s life for maybe getting a job? That felt&hellip; heavy.</p>
<p>So we pivoted. Live coding! Respectful of time! Only 1-2 hours! What could go wrong?</p>
<p>For backend folks, we cooked up these Slim framework challenges. 30-40 minutes, boom, done. People seemed to appreciate it, and we could watch them think in real-time. Pretty cool.</p>
<p>Frontend was&hellip; a different story. For an entire year (I kid you not, a YEAR) we couldn&rsquo;t find Vue.js developers who could handle basic tasks. &ldquo;Fetch data from an API and put it in a table.&rdquo; That was it. And yet, when we did find people who breezed through? They became absolute rockstars on our team.</p>
<h2 id="the-pendulum-swings-back-take-home-redux">The Pendulum Swings Back: Take-Home Redux</h2>
<p>Then my direct reports wanted to take ownership of the hiring process. Fresh eyes, fresh ideas. Their theory? Maybe live coding was too stressful. Maybe we were losing gems to performance anxiety.</p>
<p>Back to take-home we went. For the frontend, we switched to Vue 3 projects with public APIs. Backend reverted to our original portal assignments. And wow, suddenly everyone was passing! Success rates through the roof! We were geniuses!</p>
<p>&hellip;Until these same stellar assignment-completers joined the team and proved less capable than those who&rsquo;d excelled in our live coding sessions.</p>
<p>This got us thinking. The people who breezed through live coding consistently outperformed those who aced take-home assignments but struggled with real-time collaboration. Maybe we were onto something with that live format after all.</p>
<h2 id="then-ai-showed-up-and-flipped-the-table">Then AI Showed Up and Flipped the Table</h2>
<p>Just when we thought we had it figured out (narrator: they didn&rsquo;t), ChatGPT and friends crashed the party. Our carefully crafted assignments? Obsolete overnight. What took juniors days now took anyone with decent prompting skills about 20 minutes.</p>
<p>The generational shift was fascinating to watch. Early on, candidates would get this deer-in-headlights look when we mentioned they&rsquo;d clearly used AI. Like kids caught with their hand in the cookie jar. Fast forward a few months, and NOT using AI was the weird choice. It became as natural as breathing.</p>
<p>Suddenly, we&rsquo;re asking different questions. If AI can write the code, what&rsquo;s left? Turns out - everything that makes us human. How you think, how you communicate, how you collaborate, whether you can navigate the beautiful mess of ambiguity that is real-world software development.</p>
<h2 id="where-we-landed-for-now">Where We Landed (For Now)</h2>
<p>After all this experimentation, we&rsquo;ve learned some things. Or at least, we think we have:</p>
<p>The best code isn&rsquo;t always the cleverest code - it&rsquo;s the code your teammate can understand at 3 AM during an incident.</p>
<p>Seniority used to mean &ldquo;can implement any spec flawlessly.&rdquo; Now it means &ldquo;can figure out what we should build when nobody&rsquo;s quite sure, and help the team get there together.&rdquo;</p>
<p>AI changed the game, but it didn&rsquo;t replace the players. The engineers thriving now are the ones who see AI as another tool in the toolbox, not a magic wand.</p>
<p>We&rsquo;ve gotten great feedback about our process from candidates, which makes us feel pretty good. Then we go through other companies&rsquo; interview processes and&hellip; yikes. Humbling. If we had it all figured out, we wouldn&rsquo;t still be changing things.</p>
<h2 id="the-never-ending-story">The Never-Ending Story</h2>
<p>Will we change our approach again? Actually, we already are. Vassilis Poursalidis, our Engineering Director, is spearheading a new unified approach that goes live-by-default for everyone—frontend, backend, QA, the whole crew. Because here&rsquo;s the thing about working in tech: the moment you think you&rsquo;ve got it figured out, the ground shifts. The process that works today might be laughable tomorrow. And that&rsquo;s actually&hellip; kind of exciting?</p>
<p>We&rsquo;re not adapting because we have to survive. We&rsquo;re adapting because it&rsquo;s interesting, because we&rsquo;re curious, because every failure teaches us something new. Even when those failures are spectacular.</p>
<p>Our hiring journey has been messy, nonlinear, sometimes frustrating, often surprising. But it&rsquo;s been ours. Every weird experiment, every &ldquo;what if we tried&hellip;&rdquo;, every pendulum swing - it&rsquo;s all been part of figuring out how to find people we want to work with.</p>
<p>In technology, evolution isn&rsquo;t optional; it&rsquo;s the path to excellence.</p>
<hr>
<p>At TalentLMS, we&rsquo;re always trying new things and learning as we go. If that sounds like your kind of environment, <a href="https://www.epignosishq.com/careers/" target="_blank" rel="noopener noreferrer">come join the experiment</a>.</p>
]]></content:encoded></item><item><title>Inglorious Testing: Does AI Mitigate QA Curiosity?</title><link>https://blog.talentlms.io/posts/does-ai-mitigate-qa-curiosity/</link><pubDate>Mon, 24 Nov 2025 00:00:00 +0000</pubDate><dc:creator>Ioannis Psaronikolakis</dc:creator><guid>https://blog.talentlms.io/posts/does-ai-mitigate-qa-curiosity/</guid><description>As AI tools revolutionize software testing, QA professionals fear losing the curiosity and critical thinking that define their craft.
But rather than diminishing these essential traits, AI actually amplifies them by shifting the QA role from test execution to quality architecture.
This exploration examines how AI serves as a high-performing partner that handles repetitive tasks, creating space for deeper analysis of user experience, business context, and meaningful test coverage—ultimately strengthening rather than weakening the human elements that make QA invaluable.</description><enclosure url="https://blog.talentlms.io/images/posts/does-ai-mitigate-qa-curiosity.jpg" type="image/jpeg"/><media:content url="https://blog.talentlms.io/images/posts/does-ai-mitigate-qa-curiosity.jpg" medium="image"><media:title type="plain">Questioning AI</media:title><media:description type="plain">Questioning AI</media:description></media:content><content:encoded><![CDATA[<img src="https://blog.talentlms.io/images/posts/does-ai-mitigate-qa-curiosity.jpg" alt="Questioning AI" style="max-width: 100%; height: auto; margin-bottom: 1.5em;" /><p style="margin-bottom: 1.5em; padding: 1em; background-color: #f5f5f5; border-left: 4px solid #0066cc;"><strong>Ioannis Psaronikolakis</strong>, Lead QA Engineer<br/>Ioannis is a Lead QA Engineer with nearly a decade of experience in software quality assurance. He is deeply passionate about quality engineering and continuous improvement.
</p><p>The introduction of AI into software testing raised a common concern among QA professionals. Not that AI will replace testers, but that it might reduce the traits that give the role its value: curiosity, critical thinking, and attention to detail.</p>
<p>If AI can write tests, analyze logs, and generate large amounts of data quickly, what happens to the craft behind quality assurance?</p>
<p>Relying on AI without review risks weakening the skills that define strong QA work.</p>
<p>In a world where a prompt can create a test case faster than we can read it, the concern felt valid.</p>
<p>But as AI becomes part of daily workflows, another point becomes clearer:</p>
<p><strong>The risk isn&rsquo;t AI.</strong></p>
<p><strong>The risk is overlooking our core QA values.</strong></p>
<h2 id="the-fear-ai-will-make-us-less-critical">The Fear: &ldquo;AI Will Make Us Less Critical&rdquo;</h2>
<p>This concern appears in simple ways.</p>
<p>AI produces a set of tests, and the review becomes faster, sometimes too fast.</p>
<p>Outputs that would normally raise questions may pass unnoticed because they look structured or complete.</p>
<p>Speed can create the impression of quality even when depth is missing.</p>
<p>These situations show a pattern:</p>
<p><strong>AI can give a sense of productivity while lowering the level of scrutiny – if allowed to.</strong></p>
<h2 id="the-shift-ai-is-a-high-delivering-partner">The Shift: AI Is a High-Delivering Partner</h2>
<p>Here&rsquo;s the shift that changed the narrative:</p>
<p>AI can act like an architect.</p>
<p>It can propose designs, suggest flows, and structure ideas faster than any of us can type.</p>
<p>It can even behave like a quality engineer – pointing out risks, generating scenarios, evaluating consistency.</p>
<p>But AI still lacks something fundamental to QA work:</p>
<ul>
<li>It cannot interpret experience</li>
<li>It cannot sense friction in a user journey</li>
<li>It cannot experience frustration, confusion, or surprise</li>
<li>It cannot detect when something is technically correct but still wrong for the user</li>
</ul>
<p><strong>That human layer of perception – the intuition built from experience – is irreplaceable.</strong></p>
<p>So while AI can behave like a highly productive partner, it still needs our:</p>
<ul>
<li>guidance</li>
<li>direction</li>
<li>prioritization</li>
<li>correction</li>
<li>and, above all, assessment</li>
</ul>
<p><strong>AI can generate a hundred tests in a minute.</strong></p>
<p><strong>But it cannot tell when something feels wrong.</strong></p>
<h2 id="the-thin-line-when-tests-lose-meaning">The Thin Line: When Tests Lose Meaning</h2>
<p><strong>Quantity is not value – and in QA, this distinction is everything.</strong></p>
<p>Without proper review, AI can produce:</p>
<ul>
<li>verbose test suites with no business impact</li>
<li>redundant cases that inflate execution time</li>
<li>noise disguised as coverage</li>
<li>flakiness hidden behind sophistication</li>
<li>blind spots in core flows that genuinely matter</li>
</ul>
<p>It is common to see AI generate many versions of a small validation and overlook a major product path.</p>
<p>This is the thin line we walk:</p>
<p><strong>AI can accelerate useful coverage or accelerate unnecessary complexity.</strong></p>
<p><strong>The deciding factor is our critical thinking.</strong></p>
<h2 id="the-realization-ai-gives-space-to-quality">The Realization: AI Gives Space to Quality</h2>
<p>When the expectation that AI will replace thinking is removed, its benefit becomes clearer:</p>
<p>With AI handling the repetitive, mechanical tasks – the boilerplate test structures, the obvious edge cases, the initial draft work – we finally gain more time to think deeply: to explore behaviors, question assumptions, dig into risk, business context, user reality, and quality architecture.</p>
<p><strong>AI doesn&rsquo;t reduce quality thinking.</strong></p>
<p><strong>It creates more space for it.</strong></p>
<h2 id="the-new-qa-era-from-test-writer-to-quality-architect">The New QA Era: From Test Writer to Quality Architect</h2>
<p>This shift becomes clearer when looking at how the role has evolved:</p>
<ul>
<li>AI writes faster – we need to think deeper</li>
<li>AI produces more – we need to evaluate better</li>
<li>AI expands quantity – we protect quality</li>
</ul>
<p>The role shifts towards:</p>
<ul>
<li>deciding what is worth testing</li>
<li>defining the quality bar</li>
<li>shaping automation strategy</li>
<li>ensuring business alignment</li>
<li>analyzing patterns and risks</li>
<li>preventing meaningless test bloat</li>
<li>maintaining clarity, purpose, and intent</li>
</ul>
<p>These are not tasks AI can replace.</p>
<p><strong>They rely on experience, reasoning, and context.</strong></p>
<p>AI increases execution capacity, but it does not define meaning.</p>
<p>Only QA engineers can determine which scenarios matter, how coverage aligns with user behavior, and where testing supports real product outcomes.</p>
<p><strong>This is the modern QA identity:</strong></p>
<p>moving from executor to quality architect,</p>
<p>from generating tests to designing purpose,</p>
<p>from checking output to governing meaning.</p>
<p><strong>AI strengthens this identity by giving us more space to focus on it.</strong></p>
<h2 id="so-does-ai-mitigate-qa-curiosity">So, Does AI Mitigate QA Curiosity?</h2>
<p><strong>No.</strong></p>
<p>AI introduces a decision point:</p>
<p>Either accept its output blindly and let critical skills fade – or use it intentionally and expand the scope of quality thinking.</p>
<p>Curiosity becomes more important, not less.</p>
<p>Critical thinking becomes mandatory, not optional.</p>
<p>An eye for detail becomes even more essential.</p>
<p>These traits have always defined the QA role.</p>
<p>Now we need an extra eye – one that examines AI&rsquo;s work, protects meaning, and keeps quality aligned with real business value.</p>
<p><strong>In the end, AI does not change what makes QA valuable.</strong></p>
<p><strong>It highlights why those qualities continue to matter.</strong></p>
]]></content:encoded></item><item><title>APIs as Infrastructure: Optimizing for Change</title><link>https://blog.talentlms.io/posts/apis-as-infrastructure/</link><pubDate>Mon, 17 Nov 2025 00:00:00 +0000</pubDate><dc:creator>Aggelos Bellos</dc:creator><guid>https://blog.talentlms.io/posts/apis-as-infrastructure/</guid><description>APIs need to stay stable even as the systems behind them keep changing. Each version becomes a contract that cannot move, while data models and requirements evolve around it.
Keeping everything working without slowing development is harder than it seems, and common approaches often break down once incremental changes start to accumulate.</description><enclosure url="https://blog.talentlms.io/images/posts/apis-as-infrastructure.png" type="image/png"/><media:content url="https://blog.talentlms.io/images/posts/apis-as-infrastructure.png" medium="image"><media:title type="plain">Horizontal layered strata progressing diagonally from smaller to larger forms with backward transformation arrows showing API version evolution</media:title><media:description type="plain">Horizontal layered strata progressing diagonally from smaller to larger forms with backward transformation arrows showing API version evolution</media:description></media:content><content:encoded><![CDATA[<img src="https://blog.talentlms.io/images/posts/apis-as-infrastructure.png" alt="Horizontal layered strata progressing diagonally from smaller to larger forms with backward transformation arrows showing API version evolution" style="max-width: 100%; height: auto; margin-bottom: 1.5em;" /><p style="margin-bottom: 1.5em; padding: 1em; background-color: #f5f5f5; border-left: 4px solid #0066cc;"><strong>Aggelos Bellos</strong>, Senior Software Engineer<br/>Aggelos decided to become a software engineer at a young age. At 12, he started experimenting with web technologies through a Blogspot.

He is currently working in the Architecture Team on TalentLMS, …</p><p>Managing APIs is hard. An application usually supports a single version of itself. It can be refactored, restructured, and redesigned with relative freedom. On the other hand, an API has to maintain stability for all its versions that are being consumed.</p>
<p>As requirements change, more and more time is spent on how to avoid breaking changes instead of actually delivering value.</p>
<h2 id="frozen-in-time">Frozen in Time</h2>
<p>An API is a contract between the provider and the consumer. This means that once a version is released, it should remain frozen in time.
This is true not only for the API&rsquo;s contract but also for its implementation. And in practice, it is not just a &ldquo;should&rdquo;: it is what actually happens.</p>
<p>When an API is released, we don&rsquo;t care about its internal implementation anymore. Yes, there will be bugs and yes, there are multiple shared components.
But other than these, we do not make changes. If it works, don&rsquo;t change it. Right?</p>
<p>Theoretically, we could develop a completely new application for each new version. This would allow us to build something and just make sure we keep it alive. Practically, the cost of maintaining such an approach is prohibitive. Furthermore, this is not how software is built in practice.</p>
<p>Software is built in small incremental batches. So why don&rsquo;t we optimize our APIs for small incremental changes?</p>
<h2 id="problem">Problem</h2>
<p>Code frozen in time sounds good, but what about code that reflects data? Data is also evolving with each new requirement.
Take this for example:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-php" data-lang="php"><span style="display:flex;"><span> <span style="color:#66d9ef">if</span> ($course<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">active</span>) {
</span></span><span style="display:flex;"><span>    <span style="color:#75715e">// do something
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span> } <span style="color:#66d9ef">else</span> {
</span></span><span style="display:flex;"><span>    <span style="color:#75715e">// do something else
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span> }
</span></span></code></pre></div><p>If we delete the <code>active</code> field in favor of a new <code>status</code> field, then we will have code that depends on a field that does not exist anymore.
Even a new application wouldn&rsquo;t help us here as there is still the dependency on the data.</p>
<p>There are ways to mitigate this problem. The easiest one is to check the version of the API and adapt the code accordingly.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-php" data-lang="php"><span style="display:flex;"><span> <span style="color:#66d9ef">if</span> ($apiVersion <span style="color:#f92672">&gt;=</span> <span style="color:#ae81ff">2</span>) {
</span></span><span style="display:flex;"><span>     <span style="color:#66d9ef">if</span> ($course<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">status</span> <span style="color:#f92672">===</span> <span style="color:#e6db74">&#39;active&#39;</span>) {
</span></span><span style="display:flex;"><span>         <span style="color:#75715e">// do something
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>     } <span style="color:#66d9ef">else</span> {
</span></span><span style="display:flex;"><span>         <span style="color:#75715e">// do something else
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>     }
</span></span><span style="display:flex;"><span> } <span style="color:#66d9ef">else</span> {
</span></span><span style="display:flex;"><span>     <span style="color:#66d9ef">if</span> ($course<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">active</span>) {
</span></span><span style="display:flex;"><span>         <span style="color:#75715e">// do something
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>     } <span style="color:#66d9ef">else</span> {
</span></span><span style="display:flex;"><span>         <span style="color:#75715e">// do something else
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>     }
</span></span><span style="display:flex;"><span> }
</span></span></code></pre></div><p>As you can see, this approach quickly becomes unmanageable. Each new version adds more complexity to the code.</p>
<p>A more common approach is to use feature flags or to split each version into a separate folder or class:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-php" data-lang="php"><span style="display:flex;"><span>  <span style="color:#f92672">-</span> <span style="color:#a6e22e">api</span>
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">-</span> <span style="color:#a6e22e">v1</span>
</span></span><span style="display:flex;"><span>          <span style="color:#f92672">-</span> <span style="color:#a6e22e">CourseController</span><span style="color:#f92672">.</span><span style="color:#a6e22e">php</span>
</span></span><span style="display:flex;"><span>      <span style="color:#f92672">-</span> <span style="color:#a6e22e">v2</span>
</span></span><span style="display:flex;"><span>          <span style="color:#f92672">-</span> <span style="color:#a6e22e">CourseController</span><span style="color:#f92672">.</span><span style="color:#a6e22e">php</span>
</span></span></code></pre></div><p>While this can work, it does not scale well with small incremental changes. Furthermore, you tend to lose track of the latest state of the application.</p>
<h2 id="apis-as-infrastructure">APIs as Infrastructure</h2>
<p>To solve these problems, we decided to treat APIs as infrastructure, an approach first introduced by <a href="https://stripe.com/blog/api-versioning" target="_blank" rel="noopener noreferrer">Stripe</a> back in 2017.</p>
<p>The idea is simple. Your code always reflects the latest version of your API. Each time you need to introduce a change, you update your code to reflect the new requirements. Then, you add a <code>VersionChange</code> that lets you go back in time.</p>
<p>Instead of branching the system into multiple versions, we move the system forward and let transformations pull older versions backward. This keeps change concentrated in one place instead of fragmented across versions.</p>
<p>Let us build on our previous example. We have a <code>CourseEntity</code> that had an <code>active</code> field in <code>V1</code>, replaced by a <code>status</code> field in <code>V2</code>.
We would update our code to reflect the latest version, which means that our code would use only the <code>status</code> field.</p>
<p>To make sure we won&rsquo;t break previous versions, we add a <code>VersionChange</code> that restores the <code>active</code> field from the <code>status</code> field.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-php" data-lang="php"><span style="display:flex;"><span> <span style="color:#66d9ef">class</span> <span style="color:#a6e22e">ConvertCourseStatusToActiveChange</span> {
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">private</span> <span style="color:#a6e22e">string</span> $description <span style="color:#f92672">=</span> <span style="color:#e6db74">&#39;The active field was replaced by the status field for ...&#39;</span>;
</span></span><span style="display:flex;"><span> 
</span></span><span style="display:flex;"><span>    <span style="color:#66d9ef">public</span> <span style="color:#66d9ef">function</span> <span style="color:#a6e22e">apply</span>(<span style="color:#a6e22e">CourseEntity</span> $course)<span style="color:#f92672">:</span> <span style="color:#a6e22e">CourseEntity</span> {
</span></span><span style="display:flex;"><span>        $course<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">active</span> <span style="color:#f92672">=</span> $course<span style="color:#f92672">-&gt;</span><span style="color:#a6e22e">status</span> <span style="color:#f92672">===</span> <span style="color:#e6db74">&#39;active&#39;</span>;
</span></span><span style="display:flex;"><span>        <span style="color:#66d9ef">return</span> $course;
</span></span><span style="display:flex;"><span>    }
</span></span><span style="display:flex;"><span> }
</span></span></code></pre></div><p>When a request comes in, we check the requested version and apply all the necessary changes to bring the data to the requested version.</p>
<div class="mermaid">
  
sequenceDiagram
participant Request
participant API
participant VersionChanges
participant Response

Request->>API: Version: 2025-11-14
API->>API: Latest Version: 2025-11-17

loop Apply changes until we reach requested version
API->>VersionChanges: Apply 2025-11-16 changes
API->>VersionChanges: Apply 2025-11-15 changes
API->>VersionChanges: Apply 2025-11-14 changes
end

VersionChanges->>Response: Response from version 2025-11-14

</div>
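<p>To make the flow concrete, here is a minimal sketch of what that loop could look like in PHP. This is an illustration, not our production code: the <code>VersionChange</code> interface and the <code>VersionChangePipeline</code> class are hypothetical names, and we assume each change is keyed by the version it produces, so the latest version needs no entry because the code already reflects it. The <code>ConvertCourseStatusToActiveChange</code> above would simply implement this interface.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-php" data-lang="php"> // Hypothetical sketch; names are illustrative, not our actual implementation.
 interface VersionChange {
     public function apply(CourseEntity $course): CourseEntity;
 }

 final class VersionChangePipeline {
     /** @var array&lt;string, VersionChange&gt; keyed by the version each change produces */
     private array $changesByVersion;

     public function __construct(array $changesByVersion) {
         $this-&gt;changesByVersion = $changesByVersion;
         // Newest first; YYYY-MM-DD versions compare correctly as plain strings.
         krsort($this-&gt;changesByVersion, SORT_STRING);
     }

     public function downgrade(CourseEntity $course, string $requestedVersion): CourseEntity {
         foreach ($this-&gt;changesByVersion as $version =&gt; $change) {
             if ($version &lt; $requestedVersion) {
                 break; // we have walked back far enough
             }
             // Each change pulls the entity one step further back in time.
             $course = $change-&gt;apply($course);
         }
         return $course;
     }
 }
</code></pre></div><p>With the diagram above, a request for <code>2025-11-14</code> against a latest version of <code>2025-11-17</code> would apply the <code>2025-11-16</code>, <code>2025-11-15</code>, and <code>2025-11-14</code> changes in that order.</p>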

<h3 id="heres-why-we-like-this-approach">Here&rsquo;s why we like this approach:</h3>
<ul>
<li>The code always reflects the latest version of the API.</li>
<li>Small incremental changes are easy to implement.</li>
<li>Each version change has a mandatory description that explains why the change was necessary.</li>
<li>We can freeze old versions without duplicating code.</li>
</ul>
<p>Treating APIs as infrastructure lets us evolve safely, incrementally, and without fear of breaking the past.</p>
<h2 id="keeping-versions-aligned-with-reality">Keeping Versions Aligned With Reality</h2>
<p>Most API versioning schemes assume that products evolve through major releases. Versions like <code>example.com/api/v1/courses</code> and <code>example.com/api/v2/courses</code> work well when changes arrive in large batches.</p>
<p>The problem is that major releases require coordination across departments and strict lifecycle planning. More importantly, they contradict everything we have said so far:
<code>$small_incremental_changes !== $major_release</code>.</p>
<p>Small, steady changes are easier for consumers to adopt. Ideally, the versioning scheme should reflect that and communicate something meaningful to them.</p>
<p>Date-based versioning (YYYY-MM-DD) does exactly that. Each version corresponds to a real point in time, making the incremental nature of our changes visible and predictable. It aligns the version history with how the API actually evolves, instead of forcing artificial release boundaries.</p>
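<p>A pleasant side effect of this scheme is that versions can be validated and compared as plain strings, since YYYY-MM-DD orders chronologically. As a small sketch (the header handling and the fallback to the latest version are assumptions for illustration, not necessarily how our API behaves), resolving a requested version could look like this:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-php" data-lang="php"> // Hypothetical sketch: resolve a date-based API version from a request header.
 function resolveApiVersion(?string $header, string $oldestSupported, string $latest): string {
     // No version supplied: serve the latest version.
     if ($header === null) {
         return $latest;
     }

     // Strict format check; the round-trip rejects dates such as 2025-02-30.
     $date = DateTimeImmutable::createFromFormat(&#39;Y-m-d&#39;, $header);
     if ($date === false || $date-&gt;format(&#39;Y-m-d&#39;) !== $header) {
         throw new InvalidArgumentException(&#34;Invalid API version: {$header}&#34;);
     }

     // YYYY-MM-DD strings order chronologically, so plain comparison works.
     if ($header &lt; $oldestSupported || $header &gt; $latest) {
         throw new InvalidArgumentException(&#34;Unsupported API version: {$header}&#34;);
     }

     return $header;
 }
</code></pre></div><p>The same string comparison is what drives the <code>VersionChange</code> walk sketched earlier.</p>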
<h2 id="design-first-change-maybe">Design First, Change Maybe</h2>
<p>Code will always carry technical debt. Having a framework that supports change does not mean we can avoid thinking about design. Some changes will always ripple through the system. We try to strike a balance between over-engineering and pragmatism.</p>
<p>What matters is creating an environment where change is expected, guided, and safe. A structure that lets us introduce new behavior incrementally, without rewriting the past. An approach where old versions can be frozen with confidence, and new versions can evolve without fear.</p>
<p>By treating APIs as long-lived infrastructure rather than short-lived features, we make this balance possible. We keep the codebase aligned with the current truth of the system, we document why each version exists, and we ensure that past behavior stays accessible without forcing duplication or hacks.</p>
<h2 id="bonus">Bonus</h2>
<p>While we were experimenting with this approach, we found an open-source project that implements in FastAPI what Stripe describes in their blog post.
Having a concrete implementation really helped us bring this approach to PHP.</p>
<p>You can check it out here: <a href="https://github.com/zmievsa/cadwyn" target="_blank" rel="noopener noreferrer">cadwyn</a>.</p>
]]></content:encoded></item><item><title>Super Secret Project That Probably Won't Happen</title><link>https://blog.talentlms.io/posts/super-secret-project-that-probably-wont-happen/</link><pubDate>Mon, 10 Nov 2025 00:00:00 +0000</pubDate><dc:creator>Yannis Rizos</dc:creator><guid>https://blog.talentlms.io/posts/super-secret-project-that-probably-wont-happen/</guid><description>It began as a small experiment to bring engineers back into contact with each other. Junior developers paired with mentors from different teams, learning how the company really worked instead of just their own corner of it.
No formal training, just regular conversations that built trust, context, and confidence. Over time, those early pairs shaped a quiet tradition.
Mentees became mentors. The bridges stayed.</description><enclosure url="https://blog.talentlms.io/images/posts/super-secret-project-that-probably-wont-happen.png" type="image/png"/><media:content url="https://blog.talentlms.io/images/posts/super-secret-project-that-probably-wont-happen.png" medium="image"><media:title type="plain">Abstract painted bridge connecting two distinct areas representing breaking down silos and building connections across teams on a warm background</media:title><media:description type="plain">Abstract painted bridge connecting two distinct areas representing breaking down silos and building connections across teams on a warm background</media:description></media:content><content:encoded><![CDATA[<img src="https://blog.talentlms.io/images/posts/super-secret-project-that-probably-wont-happen.png" alt="Abstract painted bridge connecting two distinct areas representing breaking down silos and building connections across teams on a warm background" style="max-width: 100%; height: auto; margin-bottom: 1.5em;" /><p style="margin-bottom: 1.5em; padding: 1em; background-color: #f5f5f5; border-left: 4px solid #0066cc;"><strong>Yannis Rizos</strong>, Chief Software Architect<br/>Yannis discovered programming at age 7. Soon after, he encountered Larry Wall's three virtues of laziness, impatience, and hubris, principles that have guided his approach to software development ever …</p><p>In May 2023, I sent an email to a handful of engineers inviting them to discuss a &ldquo;<em>super secret project that probably won&rsquo;t happen</em>.&rdquo; Yes, that was the actual meeting title. I figured if I was asking people to bet time on something uncertain, I should at least be honest about the odds.</p>
<p>25 mentorship pairs later, that meeting turned into the most unexpectedly durable thing I&rsquo;ve built at Epignosis. I&rsquo;m still not entirely sure why it worked.</p>
<h2 id="the-gap">The Gap</h2>
<p>At the time, junior engineers at Epignosis only knew their immediate squad. Maybe 5 to 7 people in a 200-person company. After the pandemic, teams had drifted into their own orbits and remained there. The casual spillover that used to happen in offices was just gone. These engineers had no idea how other teams worked, what they were building, or who to even ask when they hit something outside their area. The problem wasn&rsquo;t that they lacked skill. They lacked context.</p>
<h2 id="four-skeptics-and-a-plan">Four Skeptics and a Plan</h2>
<p>I pitched something simple in that first call. Pair junior engineers across products and functions. TalentLMS padawan with an eFront mentor, and vice versa. An engineer interested in backend work with someone from DevOps. Not technical coaching about their specific codebase, but something harder to name. <em>Engineering socialization</em>, maybe. The stuff that happens when people occupy the same physical space but vanish in distributed work.</p>
<p>Someone spoke up immediately. I don&rsquo;t remember who anymore, but I remember this: they were enthusiastic about the idea and skeptical I&rsquo;d actually pull it off. Penelope, Thrasos, Christos, and Dimitris all volunteered to be the first mentors. Their skepticism wasn&rsquo;t about the concept. It was about execution. They were feeling the siloing problem too, and they thought breaking it down would take massive effort.</p>
<p>Without them stepping into something uncertain before it was proven, the program would have stayed a document in a folder somewhere. Pushed down the priority list when something urgent arrived, one more thing that seemed reasonable at the time but never quite happened.</p>
<h2 id="six-sessions-three-months">Six Sessions, Three Months</h2>
<p>We launched in June 2023 with a new cohort of interns. The format was simple on purpose. An hour every 2 weeks for 3 months, 6 sessions total. I matched the duration to the internship window partly because research says the first 90 days determine whether someone succeeds, but mostly because I didn&rsquo;t want an open-ended commitment. Open-ended things drift. Boundaries create focus.</p>
<p>I insisted on one thing: document your meetings. Not for oversight, but to help pairs track their own conversations. In time, some pairs shared sanitized versions of their notes. Those became the best onboarding material new mentors could ask for. Actual conversations, actual topics, actual problems that came up. No scorecards, no frameworks, no evaluation rubrics. Just enough structure to prevent drift without turning the whole thing into theater.</p>
<h2 id="the-conversations">The Conversations</h2>
<p>One mentee showed up to their second session worried they were underperforming. Not on any specific task. Just a general anxiety about not being good enough, not learning fast enough, not contributing enough. The mentor shared their own story about imposter syndrome. They kept coming back to that thread over the next few sessions while also covering technical stuff. How do you tell the difference between actually underperforming and just feeling like you are?</p>
<p>By session 5, they&rsquo;d covered the structured agenda and used the final meeting to just talk. They met in person that time. Career paths came up, and what actually matters long-term, and work-life balance. I&rsquo;m curious how that conversation resolved, but I&rsquo;m not part of these sessions. I only see what pairs choose to document.</p>
<p>Another pair spent an entire session mapping the org chart. Not the official one, the real one. Who actually makes architecture decisions? Who do you talk to when you need infrastructure help? Who knows why certain systems exist the way they do? This stuff doesn&rsquo;t live in any documentation. It shifts every time someone changes roles or teams are reorganized.</p>
<p>A third pair had a 20-minute detour in their second session about falsehoods programmers believe about names. Edge cases, international character sets, all the assumptions we make that blow up in production. By session 4, they were talking about how to give code review feedback that actually helps without making someone feel terrible. The mentee walked away with a plan to raise task ambiguity in their team&rsquo;s next retrospective.</p>
<p>The documentation turned out to serve a double purpose I didn&rsquo;t fully anticipate. It helps pairs track their own evolution, sure. But it also creates institutional memory without needing someone to formalize it. New mentors read notes from previous pairs and see what&rsquo;s actually possible. One pair investigated architecture testing tools and decided none of them fit. That documented failure is now useful context for anyone else hitting the same question.</p>
<h2 id="twenty-five-pairs-later">Twenty-Five Pairs Later</h2>
<p>We&rsquo;ve run 25 mentorship pairs at this point. Some of the past mentees are now mentors themselves. I never imagined the program would endure long enough for that to happen. We&rsquo;ve expanded beyond interns to include all junior hires. The cross-product and cross-functional pairing continues. Those first 90 days still feel like the window that matters most.</p>
<p>The program has been valuable for the mentors, too. They practice one-on-one skills in a low-stakes environment, an early step in their leadership path. Explaining complex systems simply is harder than it looks. They get better at it. They remember what it&rsquo;s like to be new, which helps them improve their own team&rsquo;s onboarding. Mentee questions expose them to parts of the codebase they don&rsquo;t usually touch. Gaps in documentation that experienced people don&rsquo;t notice anymore suddenly become obvious.</p>
<p>Cross-product pairing creates what I later found out researchers call weak ties. An intern who spends 6 hours over 3 months talking to a mentor from another product builds a bridge that wouldn&rsquo;t exist otherwise. Later, when they need help with an integration problem or want to understand how another team handles something, they have an actual person to ask. <em>Conway&rsquo;s Law always in motion.</em></p>
<h2 id="deliberately-incomplete">Deliberately Incomplete</h2>
<p>Every choice here involved trade-offs I couldn&rsquo;t fully resolve. The 3-month window fits the critical adjustment period, but also just matches how long interns stay. Cross-product pairing builds bridges but sacrifices domain-specific technical mentorship. Light structure keeps people engaged but risks inconsistent execution. I chose these parameters deliberately, but I wouldn&rsquo;t claim they&rsquo;re universally right. They work for our specific context.</p>
<h2 id="from-super-secret-project-to-standard-practice">From Super Secret Project to Standard Practice</h2>
<p>From &ldquo;probably won&rsquo;t happen&rdquo; to standard practice took a few months, far less time than I thought it would. Low expectations helped. I didn&rsquo;t promise transformation or measurable outcomes. The engineers who volunteered as first mentors turned that uncertain beginning into something that stuck.</p>
<p>Now it&rsquo;s just how we work. New hires get matched. Past mentees become mentors. The first mentors were skeptical I&rsquo;d pull it off, and I understand why. Programs like this usually collapse under their own complexity or drift when they ask too much. But they were wrong about one thing. Breaking down silos didn&rsquo;t take massive effort. It took a meeting with a ridiculous title, a handful of people ready to try something uncertain, and the discipline to keep it simple.</p>
<p>Sometimes that&rsquo;s enough.</p>
]]></content:encoded></item><item><title>Full-Stack Developer 2.0</title><link>https://blog.talentlms.io/posts/full-stack-developer-2.0/</link><pubDate>Mon, 03 Nov 2025 00:00:00 +0000</pubDate><dc:creator>Vassilis Poursalidis</dc:creator><guid>https://blog.talentlms.io/posts/full-stack-developer-2.0/</guid><description>Software used to feel connected. Then it splintered. Backend, frontend, DevOps, QA. Each speaking their own language, each waiting on the others to move.
Agentic coding changes the rhythm. With AI as a second brain, developers can move across the stack, see the whole system, and keep context alive.
The result is not speed for its own sake. It is flow, coherence, and fewer walls between people who build things together.</description><enclosure url="https://blog.talentlms.io/images/posts/full-stack-developer-2.0.png" type="image/png"/><media:content url="https://blog.talentlms.io/images/posts/full-stack-developer-2.0.png" medium="image"><media:title type="plain">Abstract painted figure working across multiple screens representing full-stack development capability on a warm background</media:title><media:description type="plain">Abstract painted figure working across multiple screens representing full-stack development capability on a warm background</media:description></media:content><content:encoded><![CDATA[<img src="https://blog.talentlms.io/images/posts/full-stack-developer-2.0.png" alt="Abstract painted figure working across multiple screens representing full-stack development capability on a warm background" style="max-width: 100%; height: auto; margin-bottom: 1.5em;" /><p style="margin-bottom: 1.5em; padding: 1em; background-color: #f5f5f5; border-left: 4px solid #0066cc;"><strong>Vassilis Poursalidis</strong>, TalentLMS Engineering Director<br/>Vassilis has nearly 20 years of experience working in diverse projects in the technology industry, with a strong record of leadership and technical expertise.

He is currently an Engineering Director …</p><p>Long gone are the days when a developer would walk across the entire stack. For years, software development has been moving toward specialization. We&rsquo;ve carved ourselves into distinct tribes: infrastructure engineers who live in YAML and Terraform, architects who sketch systems in boxes and make decisions in ADRs, backend developers who speak in APIs and databases, frontend developers who breathe JavaScript and CSS, and QA engineers who guard the gates of production.</p>
<p>This specialization brought expertise. But it also brought something else: <strong>gaps</strong>.</p>
<h2 id="the-gaps-between-the-silos">The Gaps Between the Silos</h2>
<p>These gaps are where projects slow down. They&rsquo;re in the handoffs, the assumptions, the translation layers between team members and teams. They&rsquo;re in the backend developer who builds an API without considering how the frontend will consume it. The infrastructure engineer who provisions resources without understanding the application&rsquo;s actual needs. The QA engineer who writes tests disconnected from how the code actually behaves.</p>
<p>Each handoff is a game of broken telephone. Each specialized role is a potential bottleneck.</p>
<h2 id="enter-agentic-coding">Enter Agentic Coding</h2>
<p>This is where agentic coding becomes transformative, not as a way to create the mythical 10x developer, but as a way to bridge these gaps. AI-assisted development tools enable a different kind of developer: <strong>the full-stack generalist who can move fluidly across the entire stack</strong>.</p>
<p>Not because they&rsquo;re superhuman, but because they have assistance.</p>
<h2 id="the-bridge-builder-not-the-specialist">The Bridge Builder, Not the Specialist</h2>
<p>With agentic coding tools, a single developer can:</p>
<ul>
<li><strong>Provision the infrastructure layer</strong>: spinning up containers, configuring cloud resources, setting up CI/CD pipelines; not as an infrastructure specialist, but as someone who understands enough to make informed decisions with AI assistance filling the knowledge gaps.</li>
<li><strong>Design coherent architecture</strong>: mapping out system boundaries, considering scalability and maintainability, with AI helping validate approaches and spot potential issues.</li>
<li><strong>Build backend services</strong>: implementing business logic, designing database schemas, creating APIs; with AI suggesting patterns, catching edge cases, and writing boilerplate.</li>
<li><strong>Craft frontend experiences</strong>: building interfaces that actually work with the backend they just created, because there&rsquo;s no translation layer, no assumptions, just continuity.</li>
<li><strong>Think like QA</strong>: writing tests at every layer, shifting testing left (earlier in the development cycle) as much as possible, because they understand the full context of what could break and why.</li>
</ul>
<p>Importantly, these bridge builders don&rsquo;t vibe code and don&rsquo;t work in isolation. They carefully review what AI suggests and always improve how they use those tools. They&rsquo;re also supported by a network of evangelists, senior staff members, and domain experts who provide guidance, review architectural decisions, and help navigate complex technical challenges. The difference is that with AI assistance, these senior voices can focus on high-level guidance and strategic direction rather than being pulled into every implementation detail. The generalist can execute across the stack with AI handling the tactical work, while leaning on experienced colleagues for wisdom about patterns, pitfalls, and best practices that only years of experience can provide.</p>
<h2 id="the-power-of-context">The Power of Context</h2>
<p>The real advantage isn&rsquo;t speed, it&rsquo;s context. When the same person (with AI assistance) works across the stack, knowledge doesn&rsquo;t get lost in translation. The API is designed with the frontend in mind because the same person is considering both. The infrastructure is provisioned based on actual application needs, not assumptions. Tests are written by someone who intimately knows the failure modes and has incorporated the appropriate logic in the development process.</p>
<p>Agentic coding tools make this possible by:</p>
<ul>
<li><strong>Keeping you in focus</strong>: Instead of spending hours researching unfamiliar territory or waiting for a specialist to become available, developers can move forward immediately with AI assistance that understands the context of what they&rsquo;re building.</li>
<li><strong>Providing on-demand expertise</strong>: need to write a complex SQL query with proper indexing strategies? Configure a load balancer with health checks and failover logic? Set up CORS policies that balance security and functionality? The developer doesn&rsquo;t need years of specialization in databases, infrastructure, or security protocols. They need to understand the problem well enough to evaluate the solution, and AI bridges the gap between understanding and implementation.</li>
<li><strong>Maintaining consistency</strong>: when infrastructure configuration, backend logic, frontend implementation, and test suites all flow from the same contextual understanding, the result is a more coherent approach.</li>
</ul>
<p>What makes this even more powerful is that AI can leverage context from across the entire development process. When building the frontend, the AI can reference the backend implementation you just created (understanding the exact shape of API responses, the error states that might occur, and the validation rules already in place). When writing tests, it knows the actual business logic from the backend and the user interactions from the frontend. When configuring infrastructure, it understands the actual resource needs based on the application code. This contextual awareness means each layer informs and improves the rest, creating solutions that are more coherent and robust than what emerges from siloed development.</p>
<h2 id="greenfield-and-legacy-two-sides-of-the-same-coin">Greenfield and Legacy: Two Sides of the Same Coin</h2>
<p>For greenfield projects, agentic coding is remarkably effective. AI tools come equipped with knowledge of current best practices across the entire stack: modern framework patterns, security considerations, scalability approaches, testing strategies, and more. Starting from scratch, a developer with AI assistance can bootstrap a properly structured project in hours: infrastructure as code following current standards, a backend with sensible architecture, a frontend using modern patterns, and comprehensive test coverage. The AI doesn&rsquo;t just generate code; it applies the collective wisdom of thousands of well-architected projects.</p>
<p>But here&rsquo;s what makes agentic coding truly powerful: it works just as well with existing codebases. AI tools can analyze and understand the patterns, conventions, and architectural decisions already present in a project. They can be tuned to mimic the existing codebase style: the naming conventions, the error handling patterns, the testing approaches, and the architectural boundaries. When you&rsquo;re adding a new feature to a five-year-old application, the AI doesn&rsquo;t impose some idealized pattern from its training. It stays aligned with your project&rsquo;s conventions, making contributions that feel native to the codebase.</p>
<p>This adaptability means the full-stack generalist can be effective <strong>whether they&rsquo;re building something new or extending something that&rsquo;s been running in production for years</strong>.</p>
<h2 id="not-replacing-specialists">Not Replacing Specialists</h2>
<p>As mentioned earlier, experts who provide guidance remain important. Deep expertise still matters. But it&rsquo;s about recognizing that many projects don&rsquo;t need ten specialists; they need well-rounded developers who can bridge the gaps with AI assistance.</p>
<p>For startups, small teams, and rapid prototyping, this is transformative. For larger organizations, it&rsquo;s about creating developers who can work across team boundaries, who can understand and contribute to multiple layers, who can see the whole system rather than just their piece.</p>
<p>Don&rsquo;t get me wrong, I do not advocate that we need fewer specialists or fewer developers. On the contrary, I advocate allowing specialists to do more meaningful work and giving developers new powers.</p>
<h2 id="the-skills-that-matter">The Skills That Matter</h2>
<p>The full-stack generalist in the age of agentic coding needs different skills than before:</p>
<ul>
<li><strong>Breadth over depth</strong>: understanding enough about each layer to ask the right questions, validate AI suggestions, and know when to call in specialists.</li>
<li><strong>Systems thinking</strong>: seeing how pieces fit together, understanding trade-offs across the stack.</li>
<li><strong>Critical evaluation</strong>: knowing when AI suggestions make sense and when they don&rsquo;t.</li>
<li><strong>Communication</strong>: being able to speak the language of different domains, even if not fluent.</li>
</ul>
<p>In a sense, the most important skills are <strong>problem-solving and the ability to communicate with fellow engineers</strong>.</p>
<h2 id="the-future-is-bridge-builders">The Future Is Bridge Builders</h2>
<p>Software development is at an inflection point. Agentic coding tools aren&rsquo;t creating 10x developers who work faster. They&rsquo;re creating bridge builders who eliminate the gaps between specialized domains.</p>
<p>This new breed of developers won&rsquo;t replace specialists. But they&rsquo;ll change how teams are structured, how quickly products can be built, and how much overhead is lost to handoffs and translation.</p>
<p>The future isn&rsquo;t about being the best backend developer or the best frontend developer. It&rsquo;s about being effective across the entire stack: not through years of specialization in each domain, but through intelligent use of AI assistance that makes generalist competence not just possible, but powerful.</p>
<p>The gaps are closing. The bridges are being built. And the developers building them might just change how we think about software development itself.</p>
<p>When you combine all of the above (the elimination of silos, the contextual awareness across layers, the ability to work on both greenfield and legacy projects, the support from senior expertise), with the significant speedup that agentic coding offers, you get compound results.</p>
<p>It&rsquo;s not just about doing more things; it&rsquo;s about doing them faster, with better integration, and with higher quality. This convergence is giving rise to what we might call <strong>v2.0</strong> of the <strong>Full-Stack Developer</strong>: someone who leverages AI to move seamlessly across the entire development lifecycle, producing cohesive solutions at a pace that would have seemed impossible just a few years ago. These developers aren&rsquo;t superhuman, but they are transformative, and <strong>they represent the future of how software gets built</strong>.</p>
]]></content:encoded></item></channel></rss>