In May 2023, I sent an email to a handful of engineers inviting them to discuss a “super secret project that probably won’t happen.” Yes, that was the actual meeting title. I figured if I was asking people to bet time on something uncertain, I should at least be honest about the odds.

Twenty-five mentorship pairs later, that meeting turned into the most unexpectedly durable thing I’ve built at Epignosis. I’m still not entirely sure why it worked.

The Gap

At the time, junior engineers at Epignosis only knew their immediate squad. Maybe five to seven people in a 200-person company. After the pandemic, teams had drifted into their own orbits and stayed there. The casual spillover that used to happen in offices was just gone. These engineers had no idea how other teams worked, what they were building, or who to even ask when they hit something outside their area. The problem wasn’t that they lacked skill. They lacked context.

Four Skeptics and a Plan

I pitched something simple in that first call. Pair junior engineers across products and functions. A TalentLMS padawan with an eFront mentor, and vice versa. An engineer interested in backend work with someone from DevOps. Not technical coaching about their specific codebase, but something harder to name. Engineering socialization, maybe. The stuff that happens when people occupy the same physical space but vanishes in distributed work.

Someone spoke up immediately. I don’t remember who anymore, but I remember this: they were enthusiastic about the idea and skeptical I’d actually pull it off. Penelope, Thrasos, Christos, and Dimitris all volunteered to be the first mentors. Their skepticism wasn’t about the concept. It was about execution. They were feeling the siloing problem too, and they thought breaking it down would take massive effort.

Without them stepping into something uncertain before it was proven, the program would have stayed a document in a folder somewhere. Pushed down the priority list when something urgent arrived, one more thing that seemed reasonable at the time but never quite happened.

Six Sessions, Three Months

We launched in June 2023 with a new cohort of interns. The format was simple on purpose. An hour every two weeks for three months, six sessions total. I matched the duration to the internship window partly because research says the first 90 days determine whether someone succeeds, but mostly because I didn’t want an open-ended commitment. Open-ended things drift. Boundaries create focus.

I insisted on one thing: document your meetings. Not for oversight, but to help pairs track their own conversations. In time, some pairs shared sanitized versions of their notes. Those became the best onboarding material new mentors could ask for. Actual conversations, actual topics, actual problems that came up. No scorecards, no frameworks, no evaluation rubrics. Just enough structure to prevent drift without turning the whole thing into theater.

The Conversations

One mentee showed up to their second session worried they were underperforming. Not on any specific task. Just a general anxiety about not being good enough, not learning fast enough, not contributing enough. The mentor shared their own story about imposter syndrome. They kept coming back to that thread over the next few sessions while also covering technical stuff. How do you tell the difference between actually underperforming and just feeling like you are?

By session five, they’d covered the structured agenda and used the final meeting to just talk. They met in person that time. Career paths came up, and what actually matters long-term, and work-life balance. I’m curious how that conversation resolved, but I’m not part of these sessions. I only see what pairs choose to document.

Another pair spent an entire session mapping the org chart. Not the official one, the real one. Who actually makes architecture decisions? Who do you talk to when you need infrastructure help? Who knows why certain systems exist the way they do? This stuff doesn’t live in any documentation. It shifts every time someone changes roles or teams are reorganized.

A third pair had a twenty-minute detour in their second session about falsehoods programmers believe about names. Edge cases, international character sets, all the assumptions we make that blow up in production. By session four, they were talking about how to give code review feedback that actually helps without making someone feel terrible. The mentee walked away with a plan to raise task ambiguity in their team’s next retrospective.

The documentation turned out to serve a double purpose I didn’t fully anticipate. It helps pairs track their own evolution, sure. But it also creates institutional memory without needing someone to formalize it. New mentors read notes from previous pairs and see what’s actually possible. One pair investigated architecture testing tools and decided none of them fit. That documented failure is now useful context for anyone else hitting the same question.

Twenty-Five Pairs Later

We’ve run twenty-five mentorship pairs at this point. Some of the past mentees are now mentors themselves. I never imagined the program would endure long enough for that to happen. We’ve expanded beyond interns to include all junior hires. The cross-product and cross-functional pairing continues. Those first 90 days still feel like the window that matters most.

The program has been valuable for the mentors, too. They practice one-on-one skills in a low-stakes environment, an early step in their leadership path. Explaining complex systems simply is harder than it looks. They get better at it. They remember what it’s like to be new, which helps them improve their own team’s onboarding. Mentee questions expose them to parts of the codebase they don’t usually touch. Gaps in documentation that experienced people don’t notice anymore suddenly become obvious.

Cross-product pairing creates what I later learned researchers call weak ties. An intern who spends six hours over three months talking to a mentor from another product builds a bridge that wouldn’t exist otherwise. Later, when they need help with an integration problem or want to understand how another team handles something, they have an actual person to ask. Conway’s Law, always in motion.

Deliberately Incomplete

Every choice here involved trade-offs I couldn’t fully resolve. The three-month window fits the critical adjustment period, but also just matches how long interns stay. Cross-product pairing builds bridges but sacrifices domain-specific technical mentorship. Light structure keeps people engaged but risks inconsistent execution. I chose these parameters deliberately, but I wouldn’t claim they’re universally right. They work for our specific context.

From Super Secret Project to Standard Practice

From “probably won’t happen” to standard practice took a few months, far less time than I thought it would. Low expectations helped. I didn’t promise transformation or measurable outcomes. The engineers who volunteered as first mentors turned that uncertain beginning into something that stuck.

Now it’s just how we work. New hires get matched. Past mentees become mentors. The first mentors were skeptical I’d pull it off, and I understand why. Programs like this usually collapse under their own complexity or drift when they ask too much. But they were wrong about one thing. Breaking down silos didn’t take massive effort. It took a meeting with a ridiculous title, a handful of people ready to try something uncertain, and the discipline to keep it simple.

Sometimes that’s enough.