<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[OOD]]></title><description><![CDATA[I'm writing about the philosophy of AI engineering.]]></description><link>https://blog.darinkishore.com</link><image><url>https://substackcdn.com/image/fetch/$s_!YlXl!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feab12bbe-cab0-4ea3-9834-5a306514f1e9_1280x1280.png</url><title>OOD</title><link>https://blog.darinkishore.com</link></image><generator>Substack</generator><lastBuildDate>Thu, 07 May 2026 11:25:29 GMT</lastBuildDate><atom:link href="https://blog.darinkishore.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[darin]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[dronathon@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[dronathon@substack.com]]></itunes:email><itunes:name><![CDATA[darin]]></itunes:name></itunes:owner><itunes:author><![CDATA[darin]]></itunes:author><googleplay:owner><![CDATA[dronathon@substack.com]]></googleplay:owner><googleplay:email><![CDATA[dronathon@substack.com]]></googleplay:email><googleplay:author><![CDATA[darin]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Next Moat]]></title><description><![CDATA[How to Build Defensible AI Products When Everyone Has Access to the Same Models]]></description><link>https://blog.darinkishore.com/p/the-next-moat</link><guid isPermaLink="false">https://blog.darinkishore.com/p/the-next-moat</guid><dc:creator><![CDATA[darin]]></dc:creator><pubDate>Sat, 23 Aug 2025 00:53:57 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!XNh9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2979b5c5-c27a-4678-9a79-aa9bad8ccff4_1536x1155.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Midjourney</h2><p>Why hasn't anyone made a better Midjourney? Lots of people have tried. ChatGPT, BlackForestLabs, Google. But you'll never get a committed Midjourney stan to switch.</p><p>What makes them so defensibly successful?</p><p>They've built a machine that distills professional creative judgment at scale.</p><p>Their community is filled with artists, illustrators, designers&#8212;people who've trained their whole lives to know good composition from bad. Taking these users and putting them in an environment that constantly forces and incentivizes them to make binary decisions (read: create <em>great</em> data) creates an advantage that is very hard to replicate, even as competitors match them technically. To create a better Midjourney, you would need to somehow recreate years of accumulated artistic taste.</p><p>Midjourney's defensibility against competitors with more resources and better models (including open-source ones) teaches us a key lesson about competing in the AI era. Even as competitors innovate technically, the companies that win, in the short and long run, will be the ones who can operationalize taste.</p><h2>What Remains Defensible?</h2><p>AI is getting more and more advanced, and can already build and scale greenfield projects capably. It will only get better at this over time.</p><p>This means engineering quality and sophistication are less of a moat than they used to be. When building AI features now, almost all frontier models are good enough for most tasks that are useful to users, and everyone has access to the same set of models.</p><p>So how does anyone differentiate themselves?</p><p>Anyone can call an LLM API. Anyone can chain three prompts together.
What they <strong>cannot</strong> copy overnight is <strong>knowing what good looks like in your domain&#8212;and operationalizing it</strong>.</p><p>This is trickier than it sounds because it's not enough to just know what's good&#8212;you have to truly, deeply understand what your customers want (including what they're not telling you, and would never tell you!).</p><p>The answer is to invest resources in what is hard to copy: <strong>defining and verifying taste</strong>.</p><h3>The Bits that Are Hard to Copy</h3><ul><li><p><strong>Expert annotations</strong>: data labeled by experts, and conversations with them, that give you much more clarity about exactly what you need to build.</p></li><li><p><strong>Evaluator craftsmanship</strong>: the combination of rigor, expertise, and great engineering that lets you evaluate your outputs, evaluate your evaluators, and consistently improve.</p></li><li><p><strong>Institutional memory</strong>: the implicit heuristics an organization develops by shipping many iterations and seeing what breaks in the real world.</p></li><li><p><strong>Systems architecture</strong>: the design that allows each of these levers to be used to its fullest potential (there is often much, much, <em>much</em> more that you can get out of your data than you might realize).</p></li></ul><p>Together, these form what I call <strong>Judgement Capital</strong>&#8212;a compounding, reusable asset that lets you consistently measure and enforce quality in ways your competitors simply can't copy.</p><p>This grows slowly, resists leakage, and amortizes over every feature&#8212;<strong>actively used and updated evaluators compound</strong>.</p><h2>Measurement as Bottleneck</h2><p><strong>Any complex pipeline is bottlenecked by its hardest atomic subtask.</strong></p><p>For an AI pipeline in a hard-to-verify domain, that subtask is often <em>evaluating the output</em>.</p><p>For example&#8212;say you created a validator with 80% accuracy using a simple
<em>correctness</em> metric. This validator limits you to shipping only what you can prove meets that bar. There is room for LLM variance to create incredible results using your pipeline, but you will only stumble across "incredible"&#8212;never reliably distinguish, reproduce, or guarantee it at scale. <strong>What you choose to measure&#8212;and how reliably you measure it&#8212;directly caps how good your product can become.</strong></p><p>If all you check is correctness, or if you aim for merely "good enough," you can never tell if you're producing "great," "incredible," or "superhuman" outputs&#8212;they all look the same.</p><h2>What You Measure, You Optimize</h2><p>Benchmarks are saturating faster and faster.</p><p>The fastest way to improve performance in a given domain is to measure it&#8212;and you cannot improve what you cannot measure!</p><p>The chart below shows how fast benchmarks are saturating.<br>Note the trend: up and to the right.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XNh9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2979b5c5-c27a-4678-9a79-aa9bad8ccff4_1536x1155.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!XNh9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2979b5c5-c27a-4678-9a79-aa9bad8ccff4_1536x1155.png 424w, https://substackcdn.com/image/fetch/$s_!XNh9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2979b5c5-c27a-4678-9a79-aa9bad8ccff4_1536x1155.png 848w, 
https://substackcdn.com/image/fetch/$s_!XNh9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2979b5c5-c27a-4678-9a79-aa9bad8ccff4_1536x1155.png 1272w, https://substackcdn.com/image/fetch/$s_!XNh9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2979b5c5-c27a-4678-9a79-aa9bad8ccff4_1536x1155.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!XNh9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2979b5c5-c27a-4678-9a79-aa9bad8ccff4_1536x1155.png" width="1456" height="1095" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2979b5c5-c27a-4678-9a79-aa9bad8ccff4_1536x1155.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1095,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:421349,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.darinkishore.com/i/171706260?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2979b5c5-c27a-4678-9a79-aa9bad8ccff4_1536x1155.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!XNh9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2979b5c5-c27a-4678-9a79-aa9bad8ccff4_1536x1155.png 424w, 
https://substackcdn.com/image/fetch/$s_!XNh9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2979b5c5-c27a-4678-9a79-aa9bad8ccff4_1536x1155.png 848w, https://substackcdn.com/image/fetch/$s_!XNh9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2979b5c5-c27a-4678-9a79-aa9bad8ccff4_1536x1155.png 1272w, https://substackcdn.com/image/fetch/$s_!XNh9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2979b5c5-c27a-4678-9a79-aa9bad8ccff4_1536x1155.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><br>From this, I ask you to take away a <em>very</em> controversial idea: AI performance on clearly defined and measured tasks goes up and to the right.</p><p>LLMs and their engineers can make numbers go up. NVIDIA used a model in a loop to speed up CUDA kernels by 10&#8211;100%. AlphaEvolve broke a record by producing an entirely new algorithm for multiplying two 4x4 complex matrices, and rescued 0.7% of <em>Google's</em> compute. And earlier this month, the Deep-Reinforce team published an RL training pipeline that produced a median speedup of 40% on KernelBench (for unseen kernels!). <a href="#user-content-fn-1"><sup>1</sup></a></p><p>Basically, <em>it doesn't really matter what the objective is</em>. If you can set up your problem <strong>as a Machine Learning problem</strong>, you can make your number go up.</p><h2>Machine Learning Problems</h2><p>A machine learning problem is one where there is training data, validation data to iterate with, a held-out test set (to approximate real-world performance), and one or more metrics to optimize.</p><p>This is how models are trained; it's an incredibly general formulation that lets you make number go up on any task you can define. You can only improve towards measurable signals.</p><p>I wrote this piece to urge you to treat your evaluation system as a machine learning problem. 
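</p><p>As a concrete sketch of that framing (hypothetical data and a placeholder <code>judge</code> function&#8212;not a real API), here is the minimal shape of an evaluation system treated as an ML problem: labeled examples split into train/validation/test, with agreement against human labels as the metric you iterate on:</p>

```python
import random

random.seed(0)

# Hypothetical labeled data: (output_text, human is_good label).
# In practice these are real pipeline outputs you scored yourself.
examples = [(f"output {i}", i % 3 != 0) for i in range(100)]

# Standard ML split: iterate on train/val, report only on the held-out test set.
random.shuffle(examples)
train, val, test = examples[:60], examples[60:80], examples[80:]

def judge(output: str) -> bool:
    """Placeholder evaluator -- in practice an LLM call or learned classifier."""
    return len(output) > 8  # stand-in logic

def agreement(split) -> float:
    """Metric to optimize: how often the evaluator matches the human label."""
    return sum(judge(o) == label for o, label in split) / len(split)

# Tune your evaluator prompt/logic against val; touch test only at the end.
print(f"val agreement:  {agreement(val):.2f}")
print(f"test agreement: {agreement(test):.2f}")
```

<p>The names and split sizes are illustrative; what matters is that the evaluator itself gets a validation set to iterate against and a test set it never sees during tuning.</p><p>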
Give it, <strong>at minimum</strong>, as much time as your core AI feature.</p><h2>A Brief Blueprint</h2><p>When you start building AI systems, they're hard to evaluate: there is a lot of data, you don't know what correct looks like, it's hard to be comprehensive, and you often can't one-shot (or even ten-shot) a prompt that properly evaluates your outputs.</p><p>Or maybe the generality of your system means you need ten different prompts and a classifier.</p><p>Maybe you just have 10 or 20 unversioned test cases that you rotate between and hand-judge when changing models.</p><p>So how do you start?</p><p>First, figure out what good looks like in your domain! This can come from your intuition for now. Think of just ONE thing that means you have a good output. The more concrete you can get it, the better, but it's okay if it's not super pinned down for now. It's very important to <strong>keep it binary</strong>. Don't be fooled into a 1&#8211;5 or percent-based score; it's much harder to calibrate. A binary score lets you leverage the fuzzy intuition you've built up from working closely with your product.</p><p>Find some inputs and outputs. Score the outputs yourself&#8212;something as simple as <code>is_good</code> works.</p><p>Calibrate your rewards. Improve them. 
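</p><p>The loop above can be sketched in a few lines. This is a hypothetical example (the outputs, the labels, and the <code>llm_is_good</code> stand-in are all made up); the point is the shape: binary human labels, a binary evaluator, and a check of where they disagree before you trust the evaluator:</p>

```python
# Your own hand labels: output -> is_good (kept binary, as argued above).
human_labels = {
    "answer cites the source":        True,
    "answer is vague and hedgy":      False,
    "answer solves the user problem": True,
    "answer contradicts itself":      False,
    "short answer, misses the point": False,
}

def llm_is_good(output: str) -> bool:
    """Stand-in for your LLM-judge call; must return a binary verdict."""
    return "contradicts" not in output and "vague" not in output

# Calibration: overall agreement, plus where the judge is too lenient or harsh.
agree = sum(llm_is_good(o) == y for o, y in human_labels.items())
too_lenient = [o for o, y in human_labels.items() if llm_is_good(o) and not y]
too_harsh = [o for o, y in human_labels.items() if not llm_is_good(o) and y]

print(f"agreement: {agree}/{len(human_labels)}")
print("judge too lenient on:", too_lenient)
print("judge too harsh on:  ", too_harsh)
```

<p>Disagreements like the &#8220;too lenient&#8221; bucket are exactly what you feed back into the evaluator on the next iteration.</p><p>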
Watch your numbers go up.</p><h2>The Compounding Advantage</h2><p>If AI is part of your core value proposition and you want to leverage increasing model capabilities, spend <em>at least</em> as much time designing and building your system's Judgement Capital as you do the core product.</p><p>Here's why this investment compounds:</p><ul><li><p>Better evaluators &#8594; Better outputs &#8594; More valuable data &#8594; Even better evaluators</p></li><li><p>Each model improvement multiplies against your evaluation infrastructure</p></li><li><p>Your competitors have to recreate years of accumulated taste from scratch</p></li></ul><p>While they're stuck playing catch-up on generation quality, you're already optimizing for dimensions of quality they haven't even discovered yet.</p><h2>Start Today</h2><p>The best time to start building Judgement Capital was when you shipped your first AI feature. The second best time is now.</p><p>Start simple:</p><ol><li><p>Pick one binary quality metric that matters to your users</p></li><li><p>Label 50&#8211;100 examples yourself</p></li><li><p>Build an evaluator</p></li><li><p>Watch your numbers go up</p></li></ol><p>Remember: In a world where everyone can generate, only those who can verify will thrive.</p><h3>Future Posts</h3><p>This is part of an (at least) four-part series on LLM engineering and evals. The next post is about the user experience of AI engineering, how the default way of doing it leads to choosing local maxima that hurt you in the long run, and the one decision that gets you into a higher-quality, more maintainable space for any LM feature from the start.</p><p>I really, really, <em>really</em> like DSPy (hint) and will be doing a lot of writing on that. I have lots typed up already, and I'm figuring out idea grouping and scoping.</p><p>My goal here is to show you how I think about AI Engineering problems&#8212;teach a man to fish type beat. 
There are ways of doing this that avoid a lot of the pain you'd otherwise run into down the line.<br></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.darinkishore.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.darinkishore.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h2>Next Steps</h2><p>If you found this valuable, it would mean a lot to me if you shared it with people you think would find it valuable too.</p><p>Also, I&#8217;m building a product based on this philosophy! Click here: <a href="https://testingmcps.com">Testing MCPs</a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.darinkishore.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading OOD! 
Subscribe to receive new posts and keep up with my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[What to Expect]]></title><description><![CDATA[A brief post about who I am, what this blog/publication is, and what you might get out of it.]]></description><link>https://blog.darinkishore.com/p/what-to-expect</link><guid isPermaLink="false">https://blog.darinkishore.com/p/what-to-expect</guid><dc:creator><![CDATA[darin]]></dc:creator><pubDate>Wed, 23 Jul 2025 19:45:33 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PUJ0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37385fd4-ee92-4e86-8b75-31f23843e4c2_2464x1343.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Why this, why now</h3><p>I want to have a home for my thoughts. I&#8217;ve recently been discovering how much writing helps to clarify them.</p><p>I have accumulated many, many thoughts on AI engineering and building reliable systems. I have also really enjoyed the writing I have found on Substack, and I would like to contribute and share my own perspective&#8212;Substack provides an intuitive way to get feedback and thoughts on my ideas, and I really want more of that.</p><p>I often have conversations where I see people doing things that caused me a lot of pain as I tried to scale up my AI pipelines. I intend to share ways to avoid this pain that work with the constraints many people face as they try to scale these systems up. 
AI Engineering is a beautiful, emerging field that is a very fun mix of engineering, science, and the humanities&#8212;there has never been any field like this. I want to explore and teach it as well as I can.</p><p>For starters, I have an ~6-post pipeline where I distill a lot of the intuition, strategy, and philosophy of how I think about AI engineering. This starts with basic strategy, goes into some psychology/UX, tackles abstractions and how to find them, and basically covers the whole cycle of creating an AI product/feature/xyz.</p><p>I&#8217;ve noticed that as I build these, I have to create more because I want to keep what I&#8217;m saying to two or three main ideas per piece.</p><p>The writing will start off a little rough and unrefined as I hone my craft, but almost all of it will be from me. LLMs are incredibly helpful for writing, but their voice is bog-standard, generic, and hard to trust. You will get my original thoughts.</p><h3>Expectations</h3><p>You should expect a post roughly every 10 to 14 days. I admire WaitButWhy; I want my posts to be in-depth but not longer than necessary to communicate their ideas.</p><p>If you would like to pledge, please do so, but for now, you won&#8217;t get anything different! I&#8217;m very new to this; it would be very sweet, and I would deeply appreciate your support. Feedback is really important to me&#8212;I&#8217;d like to know if you found anything here helpful, or if you bounced off things.</p><h3>Personal Background</h3><p>I&#8217;m Darin.</p><p>I recently graduated, dropped out of my Master&#8217;s, and just moved to San Francisco with a year of runway.</p><p>The first time I learned about AI was when I was 13. I read an incredible WaitButWhy post on the exponential curve of AI improvement. This fascinated me; I didn&#8217;t know what to do with it, but it stuck with me for a while. 
I took a class my freshman year of college called <em>The Limits of Being Human</em>, and for the final presentation, posited that the limits of being human were no longer being set by our biology or social dynamics, but by the tools we create.</p><p>That summer, I wanted to make a &#8220;universal recommendation system&#8221; and realized I basically had no CS or AI knowledge. I wanted to get good at both, but I had always shied away from technical topics because they scared me. I then read <em>So Good They Can&#8217;t Ignore You</em>, a book that told me that career satisfaction is achieved by just picking something and getting good at it. Around the same time, I learned of GPT-3 in a conversation between Sam Altman and Ezra Klein. This was <em>the most incredible</em> thing I had ever heard of&#8212;a machine that can <strong>actually fucking learn!</strong> Combining these two threads, I decided to spend the foreseeable future &#8220;getting good at AI&#8221; (and CS!) because I wanted to create with it, and because it was clearly the most important thing in the world to learn about.</p><p>I spent a year and a half teaching myself the mathematical foundations, changed campuses, and had the privilege of diving firsthand into creating AI programs and starting NLP research with Dr. Jinho Choi.</p><p>Then, I jumped headfirst into two years of synthetic data generation in the mental health space&#8212;a majority of which I spent working on problems that were intractable given model capabilities at the time. I <strong>bootstrapped my data from scratch with no ground truth</strong>, and have experienced many, <em>many</em> hours of pain while working to get anything reliably useful from LLM pipelines.</p><p>Seeing all the ways these systems can fail inevitably teaches you how to build pressure-tested systems that work. 
Achieving this takes innovation on both the technical and human levels: I was the sole data labeler for the final project iteration, meaning nothing could succeed unless I maintained rigorous self-consistency. Due to extreme resource constraints, I had to invest heavily in improving the user experience of labeling, scaling my own intuition, and rapidly iterating based on small feedback loops.</p><p>This is a very unforgiving experience I would not recommend to anybody, but after running the LLM engineering gauntlet on nightmare difficulty, I finally got some success (88 F1, n=150) on reliably distinguishing the presence of mental health (DSM) criteria in Reddit posts.</p><p>On a more personal note, I love creativity in all its forms. I&#8217;m exploring writing right now, but it is one of many mediums. I love to learn and to talk to people. I enjoy ceramics&#8212;it gives me something to do with my hands, and is grounding and visceral. I also enjoy biking, because it&#8217;s a much more personal way to explore the place I&#8217;m in (vs., e.g., a car).</p><p>I moved to SF because it&#8217;s where the weirdos are. There&#8217;s nowhere else to be if you want to be surrounded by interesting people doing incredible things. I&#8217;ve spent three weeks here, and I <strong>love</strong> it.</p><p>I don&#8217;t know what to work on yet or how I want to spend my time, but I want to apply my competencies in ways that feel important and fun. I really care about human-AI collaboration, and I want to see how we can work, play, and create together in new ways. Most of all, I want to create meaningful tools and systems that help people navigate life&#8217;s complexity, because I know firsthand how challenging that can be!</p><h3>Community</h3><p>This will be a thoughtful, kind, helpful space. Objections should be surfaced and discussed. 
I really want the people who read this to share how things resonate with them, what they found helpful, and what their experience or intuition disagrees with.</p><p>I don&#8217;t take myself too seriously. I hope you don&#8217;t either.</p><h3></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!PUJ0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37385fd4-ee92-4e86-8b75-31f23843e4c2_2464x1343.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!PUJ0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37385fd4-ee92-4e86-8b75-31f23843e4c2_2464x1343.png 424w, https://substackcdn.com/image/fetch/$s_!PUJ0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37385fd4-ee92-4e86-8b75-31f23843e4c2_2464x1343.png 848w, https://substackcdn.com/image/fetch/$s_!PUJ0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37385fd4-ee92-4e86-8b75-31f23843e4c2_2464x1343.png 1272w, https://substackcdn.com/image/fetch/$s_!PUJ0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37385fd4-ee92-4e86-8b75-31f23843e4c2_2464x1343.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!PUJ0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37385fd4-ee92-4e86-8b75-31f23843e4c2_2464x1343.png" width="2464" height="1343" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/37385fd4-ee92-4e86-8b75-31f23843e4c2_2464x1343.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1343,&quot;width&quot;:2464,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7724137,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://dronathon.substack.com/i/168805369?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F310ce353-0108-4311-ab43-e4bf381eb1c4_2464x1856.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!PUJ0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37385fd4-ee92-4e86-8b75-31f23843e4c2_2464x1343.png 424w, https://substackcdn.com/image/fetch/$s_!PUJ0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37385fd4-ee92-4e86-8b75-31f23843e4c2_2464x1343.png 848w, https://substackcdn.com/image/fetch/$s_!PUJ0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37385fd4-ee92-4e86-8b75-31f23843e4c2_2464x1343.png 1272w, https://substackcdn.com/image/fetch/$s_!PUJ0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37385fd4-ee92-4e86-8b75-31f23843e4c2_2464x1343.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.darinkishore.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading OOD! Subscribe (it&#8217;s free!) 
to get thoughtful explorations and useful insights about building things that work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item></channel></rss>