# LLM Error Email Lands Internship: Startup Hustler's Unconventional Outreach Rewarded After Hitting Rate Limits
## The Accidental Pitch: LLM Error Spurs Unconventional Internship Opportunity
The inbox of any busy founder or executive is a graveyard of cold outreach—most messages are deleted unread, filtered into oblivion by algorithms designed to protect their attention. Yet, on February 5, 2026, @gregkamradt encountered an unsolicited internship request that defied every expectation. Instead of a carefully crafted pitch, the body of the email contained nothing more than a raw, technical error message generated by a Large Language Model (LLM). Rather than dismissing the applicant, Sanjit, for what appeared to be a catastrophic technical oversight, Kamradt chose to engage. This decision to prioritize the underlying effort over the surface-level execution has since become a talking point in the community regarding initiative and hustle. The core lesson emerging from this seemingly flawed communication is the premium placed on demonstrating agency in the pursuit of ambitious goals, even when the tools used to achieve that goal briefly malfunction in public view.
Kamradt’s decision to initiate a call with the applicant, identified as @sanjitr11, underscored a critical philosophy: look beyond the noise to find the signal. For many recruiters, an email body filled with code errors signals incompetence or lack of attention to detail. For Kamradt, the error message hinted at something else entirely: an ambitious attempt at automation that went slightly awry under pressure. This willingness to meet the applicant halfway—to investigate the story behind the failed automation—was the very quality that ultimately earned the applicant consideration for an internship.
This incident serves as a stark reminder that in fast-moving tech sectors, particularly when dealing with early-stage startups, perfection is often the enemy of progress. The applicant was actively "testing in prod," deploying self-coded solutions to reach his target demographic. While the testing failed in presentation, it succeeded in achieving the primary objective: securing a response from the desired recipient.
## Decoding the Error: The Story Behind the Automated Outreach
Sanjit’s journey to that fateful email was marked by a distinct pivot in strategy. Initially casting a wide net for roles at general tech companies, he rapidly refined his target.
### The Shift to High-Velocity Startup Environments
His focus narrowed dramatically toward high-growth, early-stage ventures, specifically targeting founders associated with Y Combinator (YC) batches. At the time of the incident, with the W26 batch underway, this meant seeking out individuals actively engaged in scaling their new ventures, like those running ARC. This strategic narrowing required a more personalized, yet scalable, outreach mechanism than standard job board applications.
### The Mechanics of Self-Coded Automation
To manage the volume this focused approach required, Sanjit opted to build his own outreach solution rather than buy a standard marketing tool: he coded the entire process himself, from list curation to message deployment. This reliance on self-developed tooling immediately signals a high degree of technical comfort and a proactive approach to problem-solving; a minimal sketch of what such a pipeline might look like follows below.
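For illustration only, here is the general shape of a self-coded outreach loop of this kind. The CSV schema, the sender address, the `generate_pitch` callable, and the local SMTP relay are all assumptions made for the sketch, not details from Sanjit's actual system.

```python
# A minimal sketch of a self-coded outreach loop; the CSV schema, sender address,
# generate_pitch callable, and local SMTP relay are illustrative assumptions.
import csv
import smtplib
from email.message import EmailMessage

def send_batch(contacts_csv: str, generate_pitch) -> None:
    with open(contacts_csv, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: name, email, company
            body = generate_pitch(f"{row['name']}, founder at {row['company']}")
            msg = EmailMessage()
            msg["From"] = "applicant@example.com"   # placeholder sender
            msg["To"] = row["email"]
            msg["Subject"] = f"Internship interest -- {row['company']}"
            msg.set_content(body)  # note: no check that body is prose, not an error dump
            with smtplib.SMTP("localhost") as smtp:  # placeholder mail relay
                smtp.send_message(msg)
```

The single unguarded line here, `msg.set_content(body)`, is exactly where a raw error string can slip out as the message itself if generation fails upstream.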
### The LLM Stack Under Strain
The visible error message was a direct consequence of the tools Sanjit was using to generate personalized content at scale. He initially leveraged Google's Gemini model to generate the message content, but the demands of bulk sending quickly exhausted the free tier: he hit rate limits. As a pragmatic fallback, he pivoted to alternative infrastructure, selecting Groq, known for its exceptionally low-latency inference, as his backup engine and running the Llama model there for its reported speed. The LLM error that landed in Kamradt's inbox was therefore the byproduct of hitting a quota on Gemini and switching mid-flight to Groq/Llama, with the raw error output going out in place of the intended prose.
"Why did he use Llama and Groq?" The answer reveals not a lack of skill, but resourcefulness: Groq was the fallback LLM after Gemini rate limits were encountered, chosen specifically for Llama’s speed.
## The Counterintuitive Success Rate: Errors Outperform Perfection
The most compelling piece of data shared by Kamradt challenges conventional wisdom about professional communication. The sample, though small, is fascinating when viewed through the lens of attention economics.
Sanjit deployed 30 distinct outreach emails in total, split evenly between two conditions:
| Email Type | Quantity Sent | Responses Received |
|---|---|---|
| Clean/Flawless | 15 | 0 |
| Erroneous/LLM Error | 15 | 3 (Including Kamradt) |
The data suggests a striking anomaly: the perfectly crafted emails yielded zero responses, while the emails containing obvious technical errors generated a 20% response rate.
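The arithmetic behind those rates, made explicit as a quick check (the figures come straight from the table above):

```python
# Response rates from the reported split: 15 clean emails vs. 15 carrying a visible LLM error.
results = {"clean": (0, 15), "erroneous": (3, 15)}  # (responses, emails sent)
for variant, (hits, sent) in results.items():
    print(f"{variant}: {hits}/{sent} = {hits / sent:.0%}")  # clean: 0%, erroneous: 20%
```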
Small as the sample is, the contrast suggests that the error itself acted as a remarkably effective attention-grabbing anomaly. In a sea of polished, templated communications, the raw LLM failure broke the pattern the recipient's filters expect, forcing a moment of genuine curiosity: the recipient was compelled to investigate what the error was rather than automatically dismissing the content.
## Lessons Learned: Agency Over Accuracy in the Hustle
The ultimate reward wasn't just the internship consideration; it was the broader validation of Sanjit's proactive effort by the community. After Kamradt shared the story, five different individuals independently reached out offering Sanjit assistance, credits, or even alternative internship openings. This ripple effect highlights the power of vulnerability paired with visible effort.
The core takeaway emphasized by Kamradt is the rewarding of demonstrable agency. The applicant didn't just say he was capable of building tools; he showed it by coding an outreach system, even if the deployment failed momentarily. This is the essence of the principle: "Judge someone by their prompt, not by their rate limit errors."
It is crucial to normalize the messy reality of innovation. When engineers and operators are encouraged to "test in prod," mistakes are an inevitable byproduct. The key is whether the initiative itself demonstrates an understanding of the goal and the willingness to iterate rapidly. Sanjit’s error, born from pushing boundaries with self-coded automation, was ultimately a better indicator of his potential than a series of flawless, yet ultimately ignored, template emails. In the modern digital hustle, proactive effort, even when imperfectly executed, speaks louder than passive professionalism.
Source: https://x.com/gregkamradt/status/2019453313616417202
This report is based on the digital updates shared on X. We've synthesized the core insights to keep you ahead of the marketing curve.
