<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Jennifer's Substack]]></title><description><![CDATA[My personal Substack]]></description><link>https://jenniferlensborn.substack.com</link><image><url>https://substackcdn.com/image/fetch/$s_!Xdpi!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fjenniferlensborn.substack.com%2Fimg%2Fsubstack.png</url><title>Jennifer&apos;s Substack</title><link>https://jenniferlensborn.substack.com</link></image><generator>Substack</generator><lastBuildDate>Thu, 07 May 2026 03:24:44 GMT</lastBuildDate><atom:link href="https://jenniferlensborn.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Jennifer Lensborn]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[jenniferlensborn@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[jenniferlensborn@substack.com]]></itunes:email><itunes:name><![CDATA[Jennifer Lensborn]]></itunes:name></itunes:owner><itunes:author><![CDATA[Jennifer Lensborn]]></itunes:author><googleplay:owner><![CDATA[jenniferlensborn@substack.com]]></googleplay:owner><googleplay:email><![CDATA[jenniferlensborn@substack.com]]></googleplay:email><googleplay:author><![CDATA[Jennifer Lensborn]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[I Stopped Looking for the Right App and Started Building It Instead]]></title><description><![CDATA[On returning to development after twenty years, using Claude to write SwiftUI code, and why my son now wants to build his own app.]]></description><link>https://jenniferlensborn.substack.com/p/i-stopped-looking-for-the-right-app</link><guid 
isPermaLink="false">https://jenniferlensborn.substack.com/p/i-stopped-looking-for-the-right-app</guid><dc:creator><![CDATA[Jennifer Lensborn]]></dc:creator><pubDate>Wed, 01 Apr 2026 12:47:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!oaDM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15f7a10b-2aa2-4465-9979-a996ec799117_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A few months ago I was browsing the App Store looking for a multiplication app for my son. He has been struggling with his times tables, and I figured there had to be something good out there. There were several things, actually, but almost every one of them sat behind an account signup and a subscription fee that felt steep for something he might use for a few weeks and then forget about.</p><p>And that was really the part that bothered me. Not just the cost, but the idea of paying for something I could not adapt to how he actually learns.</p><p>I have spent over twenty-five years working in technology, leading teams and shaping how systems are built, maintained, and used. I understand how software is structured, how important templates are, how much usability and integration matter. But I had not written code myself in over twenty years. I was certified as a web developer back when the web was young, and then life moved me into leading teams.</p><p>So I started wondering how far AI had come at actually building something: not writing a document, not summarizing information, but creating a working app from scratch. It felt worth finding out.</p><p>The Multiplication Project was born from two things at once: a curiosity about what AI could actually build, and a genuine desire to create something that helps my son reinforce multiplication in a way that goes beyond memorizing times tables. 
Not a quick fix, but something he could actually engage with.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!oaDM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15f7a10b-2aa2-4465-9979-a996ec799117_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!oaDM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15f7a10b-2aa2-4465-9979-a996ec799117_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!oaDM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15f7a10b-2aa2-4465-9979-a996ec799117_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!oaDM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15f7a10b-2aa2-4465-9979-a996ec799117_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!oaDM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15f7a10b-2aa2-4465-9979-a996ec799117_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!oaDM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15f7a10b-2aa2-4465-9979-a996ec799117_1536x1024.png" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/15f7a10b-2aa2-4465-9979-a996ec799117_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3426778,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://jenniferlensborn.substack.com/i/192836375?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15f7a10b-2aa2-4465-9979-a996ec799117_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!oaDM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15f7a10b-2aa2-4465-9979-a996ec799117_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!oaDM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15f7a10b-2aa2-4465-9979-a996ec799117_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!oaDM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15f7a10b-2aa2-4465-9979-a996ec799117_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!oaDM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15f7a10b-2aa2-4465-9979-a996ec799117_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><div><hr></div><h3><strong>Why iPhone, Why Xcode</strong></h3><p>The technical choices were simpler than they might sound. We are an Apple household, so an iPhone app was the obvious starting point. 
I already had my MacBook Pro, which I have been using to run a fully local AI setup for deeply personal content, something I wrote about in detail here:</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;b4e9fbf2-eeeb-4de3-8218-b9e8ed251686&quot;,&quot;caption&quot;:&quot;In a previous post, How I Work with AI: Why Not All Conversations Belong in the Same Place, I wrote about how I&#8217;ve ended up using three different AI environments over time.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Running AI at Home, and What Changed When I Took Privacy Seriously&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:418258031,&quot;name&quot;:&quot;Jennifer Lensborn&quot;,&quot;bio&quot;:&quot;Cloud industry leader who keeps things running smoothly at work and unwinds with crochet, board games, and video games, sharing a mix of practical lessons and personal stories to inspire and make you smile.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6bcf0936-65de-4117-ab1c-0c84fef890b8_567x567.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-02-08T17:16:27.439Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9b3581cd-c024-499a-9bf5-c99e62a74e1d_1536x1024.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://jenniferlensborn.substack.com/p/running-ai-at-home-and-what-changed&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:187295501,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:2,&quot;comment_count&quot;:0,&quot;publication_id&quot;:7049890,&quot;publication_name&quot;:&quot;Jennifer's 
Substack&quot;,&quot;publication_logo_url&quot;:&quot;&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>Having that foundation made Xcode a natural next step. It is purpose-built for iOS development, and SwiftUI gives you a relatively clean way to build views without needing to wrestle with too much complexity upfront. For someone returning to development after a long gap, that simplicity mattered.</p><p>My boyfriend, who is an app developer himself, offered a few tips and tricks along the way that helped me move faster than I might have on my own. He has also noticed something similar to what I found: working with Claude has felt more seamless and less messy than doing the same work in ChatGPT. He works directly in the CLI, while I have been using a mix of Claude chat and the coding assistant connected to Claude in Xcode. Different approaches, but a similar conclusion.</p><div><hr></div><h3><strong>Coming Back to Development After Twenty Years</strong></h3><p>I want to be honest about what it felt like to open Xcode for the first time in this context. The interface itself was not as intimidating as I expected. I jumped in fairly quickly, which surprised me.</p><p>What tripped me up early was something small but genuinely confusing: the difference between the Canvas and the Simulator. The Canvas shows you a static preview of what a view looks like. The Simulator actually runs the app. I thought the Canvas was the Simulator, and I kept wondering why certain things were not responding the way I expected until I realized there was a separate Simulator window. Looking back, it was a silly mistake&#8212;embarrassing, but the kind you only make once.</p><p>I eventually enabled Developer Mode on my own iPhone to test directly on a real device. I understand what Developer Mode does: it allows unverified apps and deeper system access, and I stay aware of that. 
Since the only app I was running on it was mine, it felt like a reasonable trade-off. The important thing is going in with open eyes rather than just clicking through settings without thinking.</p><p>There was also a very practical reason I moved to the phone. Emojis would not render properly in either the Canvas or the Simulator. On the actual device, they worked perfectly. Sometimes the answer to a technical problem is just: try it on the real thing.</p><div><hr></div><h3><strong>Using the Right Tool for the Right Job</strong></h3><p>Those of you who have read my previous posts will know that I think carefully about which AI tool I use for what. Not all conversations belong in the same place. That principle showed up here too.</p><p>I used ChatGPT to plan the structure of the app: the overall flow, what views I would need, and how the experience should work from a usability perspective. ChatGPT is where I think out loud, and for that kind of broad planning conversation it works well for me.</p><p>When it came to actually writing SwiftUI code, I switched to Claude. This was not a decision I made in advance. It happened naturally after I found myself struggling to get the results I wanted from ChatGPT for the coding work specifically. SwiftUI views have a particular structure, and I kept running into issues here and there. It was a bit frustrating. When I switched to Claude, much of that friction went away. The code it produced had fewer issues and required less back and forth to get working.</p><p>This is just my experience so far, and it is still early. But for this specific kind of work, Claude felt like the better fit.</p><div><hr></div><h3><strong>The Time Claude Deleted Its Own Best Work</strong></h3><p>This is the part of the story I want to tell carefully, because it is not a simple &#8220;AI made a mistake&#8221; moment. 
It is more interesting than that.</p><p>At one point I started a new chat within my project and asked Claude to summarize what I had been building before moving forward. The problem was that my prompt did not do a good job of describing the view structure and theme I had already established. Without that context, Claude built new views from scratch.</p><p>And they looked better than mine.</p><p>I noticed the difference and pointed it out. Claude&#8217;s response was something along the lines of: my versions were over-designed, let me rewrite all three files now cleanly. And then it deleted its own files and replaced them with simpler ones.</p><p>I did not want simpler. I was not at the stage of polishing the final design yet. I was still trying to get something basic and functional up and running first. But those better-looking views had caught my attention, and I wanted to keep them for later reference.</p><p>By coincidence, I had taken a screenshot before it removed them. But I did not have a copy of the code: I had already reapplied the new files without realizing the better-looking views were gone, and I had not committed anything yet.</p><p>What struck me afterwards was not just that it deleted the files. It was how natural that decision was from its perspective. It was not thinking about preserving options or keeping something just in case. It was solving for the instruction I gave it.</p><p>The lesson I took from this was twofold. First, commit your code. That one is on me, and it is a lesson as old as development itself. Second, be specific in your prompts. Claude acted on what I asked, but what I asked did not include: please do not remove what you just built. Without clear context, it made a reasonable but frustrating call.</p><div><hr></div><h3><strong>The Moment It Felt Real</strong></h3><p>The more I worked on the structure, the more I kept thinking about my son using it. 
Not in a general user sense, but very specifically, what would actually help him, what would confuse him, what would make him want to keep going.</p><p>There is a difference between code that runs in a simulator and something you hold in your hand.</p><p>When I tested the app on my iPhone, something shifted. It worked. It felt like an actual thing, not a project. My son was there when I tested it, and watching him interact with it, seeing him get excited, was one of those unexpected moments that makes a side project worth doing. He told me he wanted to build his own app.</p><p>That was the moment it shifted for me. Not because the app was working, but because it had crossed a line. From something I was experimenting with, to something that actually meant something to someone else.</p><div><hr></div><h3><strong>Where I Have Landed</strong></h3><p>The Multiplication Project is still in progress. A few views are built, and there is more to do. But I have learned enough already to say a few things with some confidence.</p><p>AI does not replace the need to understand structure. If anything it makes structure more important, because a poorly framed prompt produces code that looks right but solves the wrong problem. My background in technology actually helped here more than I expected. Knowing what good structure looks like, even if I could not always write it from memory, meant I could recognize when something was off.</p><p>Choosing the right tool still matters. In my case, ChatGPT for thinking, Claude for coding. That division emerged from my experience, not from a plan.</p><p>And commit your code! Seriously&#8230; every time.</p><p>Maybe the most unexpected part of all of this is that I am learning something new, which has brought me genuine joy and excitement. I have reconnected with something I had not done in a very long time. Not as a developer in the traditional sense, but as someone building something again. 
Something that matters, for someone specific, in a way that no subscription fee would ever quite capture.</p>]]></content:encoded></item><item><title><![CDATA[When AI Makes Poor Documentation Look Correct]]></title><description><![CDATA[AI assistants can find documentation faster than ever. But they cannot tell whether that documentation is still correct.]]></description><link>https://jenniferlensborn.substack.com/p/when-ai-makes-poor-documentation</link><guid isPermaLink="false">https://jenniferlensborn.substack.com/p/when-ai-makes-poor-documentation</guid><dc:creator><![CDATA[Jennifer Lensborn]]></dc:creator><pubDate>Sat, 14 Mar 2026 14:09:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!YVAW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dd28b5-82b9-4001-86d2-72d9254c0a46_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>The Confidence Problem</strong></h2><p>Recently I had a conversation with a friend and former colleague about AI being used inside workplace documentation. His company relies heavily on Atlassian Confluence to store internal processes, guides, and operational knowledge.</p><p>They had recently started using AI features that allow co-workers to ask questions and receive answers generated from the documentation in the system. Instead of searching through several pages and trying to piece together the right information, someone can simply ask a question and receive a summarized response.</p><p>During our conversation he told me something that surprised him: the AI assistant was returning answers that were clearly incorrect.</p><p>The responses themselves read perfectly fine. They were clear, well written, and structured like straightforward explanations. Anyone reading the answer who did not already know the correct process would have no reason to question it. 
The AI assistant presented the response as a factual answer.</p><p>The problem was that the information behind that answer was not always correct. In some cases it came from documentation that was outdated or no longer reflected the current way of working.</p><p>For someone who already understood the process, the mistake was easy to see. For someone simply trying to learn how something works, the answer would look completely trustworthy.</p><p>That is where the real challenge begins. When incorrect information is delivered in a confident and well-written way, it becomes much harder to recognize that something is wrong. How would a new user ever know the difference?</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YVAW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dd28b5-82b9-4001-86d2-72d9254c0a46_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!YVAW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dd28b5-82b9-4001-86d2-72d9254c0a46_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!YVAW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dd28b5-82b9-4001-86d2-72d9254c0a46_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!YVAW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dd28b5-82b9-4001-86d2-72d9254c0a46_1536x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!YVAW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dd28b5-82b9-4001-86d2-72d9254c0a46_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!YVAW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dd28b5-82b9-4001-86d2-72d9254c0a46_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c3dd28b5-82b9-4001-86d2-72d9254c0a46_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1670624,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://jenniferlensborn.substack.com/i/190930020?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dd28b5-82b9-4001-86d2-72d9254c0a46_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!YVAW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dd28b5-82b9-4001-86d2-72d9254c0a46_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!YVAW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dd28b5-82b9-4001-86d2-72d9254c0a46_1536x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!YVAW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dd28b5-82b9-4001-86d2-72d9254c0a46_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!YVAW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dd28b5-82b9-4001-86d2-72d9254c0a46_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><div><hr></div><h2><strong>The Knowledge Behind the Answer</strong></h2><p>What stood out to me in that conversation was that the 
issue was not really the AI assistant itself. The real issue was the documentation behind it.</p><p>Most organizations build documentation over many years. Processes change, systems change or are replaced, organizational changes may move people around or out, and project teams move on to new work. Pages that were once accurate slowly become outdated. Sometimes they are updated in one place but not another. Sometimes the same process gets documented multiple times in slightly different ways.</p><p>Over time a knowledge base grows, but not always in a clean or consistent way.</p><p>Before AI, this mostly created inconvenience. If someone searched for information, they might find several different pages and have to read through them to figure out which one was most current. It took time, but the process forced people to interpret what they were reading.</p><p>AI changes that experience. Instead of reading multiple pages, someone asks a question and receives a single answer. The system looks across the documentation and generates what it believes is the best response.</p><p>The AI does not truly understand which document represents the current truth. It simply works with the information it can access.</p><p>If the knowledge base contains outdated or conflicting material, the AI can still produce an answer that sounds complete and convincing. It could also depend on the predefined prompts the AI assistant is configured to use.</p><p>It brings us back to one of the oldest sayings in computing: <strong>garbage in, garbage out.</strong></p><p>The difference now is that the output can look very clear and convincing.</p><div><hr></div><h2><strong>The Archiving Assumption</strong></h2><p>Another part of our conversation stayed with me. My colleague mentioned that some of the documentation influencing the answers had already been archived. 
At least, that was the assumption.</p><p>In systems like Confluence, archiving usually moves a page out of the main content tree and removes it from normal search results. But the content itself still exists in the system unless it is deleted. In some situations, because of lingering links, indexing behavior, or search inconsistencies, archived content can still appear or remain accessible to systems reading the underlying data.</p><p>From the perspective of the person maintaining the documentation, the page feels like it has been removed. From the perspective of the system analyzing the knowledge base, the information may still be there.</p><p>For the employee asking a question, none of that background is visible. They simply see an answer that appears clear and confident. Would you even know if the response was correct or not?</p><p>If the answer is presented as a fact, there is little reason for someone to question whether it came from documentation that no longer reflects the current way of working.</p><div><hr></div><h2><strong>Where Humans Still Matter</strong></h2><p>None of this means AI should be avoided in workplace knowledge systems.</p><p>Tools that help search and summarize documentation can be extremely useful. They reduce the time people spend digging through pages and help surface information that might otherwise stay hidden.</p><p>What changes is where human effort needs to go.</p><p>Instead of spending time searching for information, the more important work becomes maintaining the quality of the knowledge itself.</p><p>That starts with ownership. Important documentation should have someone responsible for reviewing it and keeping it accurate over time. Without clear ownership, outdated pages slowly build up, and systems that rely on them will continue to use them.</p><p>Once an AI assistant starts answering questions, it can also help reveal problems in the documentation. 
One useful practice is asking documentation owners to occasionally test the system the same way an employee would. Instead of reading the page directly, they can ask the AI assistant questions about their own documentation and see what answer comes back. This allows them to see what users actually experience and catch situations where the system summarizes something incorrectly or pulls from outdated material.</p><p>AI can also help maintain the knowledge base itself. Organizations could use it to monitor documentation and flag content that has not been updated for a long time. Instead of aging quietly in the background, those pages could trigger reminders for a human review.</p><p>In this way, AI becomes part of the maintenance process, not just the interface for answers.</p><p>The important part is that the final responsibility still sits with people. AI can help surface information, highlight potential issues, and even suggest improvements. But someone still needs to decide whether the content reflects how the organization actually works today.</p><p>That is where the idea of a human in the loop becomes practical. AI can deliver knowledge faster than people could search for it themselves, but humans still need to confirm that the knowledge behind those answers is correct.</p><div><hr></div><h2><strong>A Small Shift in Responsibility</strong></h2><p>What stands out to me about this shift is that AI does not remove the responsibility for maintaining documentation. In many ways it makes that responsibility more visible.</p><p>Before AI, messy documentation mostly slowed people down. Someone might spend extra time searching, ask a colleague for help, or eventually find the right page after reading several others.</p><p>Now the system often provides an answer immediately. That speed is helpful, but it also means the quality of the knowledge behind the system matters even more.</p><p>AI can connect information, summarize documents, and surface answers in seconds. 
What it cannot do is decide whether that information still reflects how the organization actually works.</p><p><strong>That part still belongs to people.</strong></p><p>If AI becomes the layer between employees and the knowledge they rely on, then the quality of that knowledge becomes part of the organization&#8217;s infrastructure.</p><p>It is no longer just documentation; it becomes part of how decisions are made. <br>And when outdated information is presented with confidence, it can look correct to anyone who doesn&#8217;t already know the difference.</p>]]></content:encoded></item><item><title><![CDATA[All I See Are—Em Dashes]]></title><description><![CDATA[An Observation]]></description><link>https://jenniferlensborn.substack.com/p/all-i-see-areem-dashes</link><guid isPermaLink="false">https://jenniferlensborn.substack.com/p/all-i-see-areem-dashes</guid><dc:creator><![CDATA[Jennifer Lensborn]]></dc:creator><pubDate>Sun, 22 Feb 2026 15:00:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VSQD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8353430d-e617-48cd-8e1f-c905a6308909_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>An Observation</h3><p>Lately, I cannot scroll LinkedIn without noticing the em dash in product announcements, reflective leadership posts, AI commentary, and thoughtful threads that aim to sound conversational while remaining polished.</p><p>The punctuation mark itself is not new. It has long been one of the most flexible tools in written English. As explained by <a href="https://www.merriam-webster.com/grammar/em-dash-en-dash-how-to-use">Merriam-Webster</a>, &#8220;The em dash (&#8212;) can function like a comma, a colon, or parenthesis&#8221;&#8212;signaling interruption, emphasis, or a shift in thought. 
When used intentionally, it adds rhythm and preserves the natural movement of a sentence in ways that stricter punctuation sometimes cannot.</p><p>This is not a complaint about grammar. It is an observation about saturation, about what saturation does to perception, and about how AI-driven overuse might change our culture of using, or avoiding, em dashes and other polished words altogether.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VSQD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8353430d-e617-48cd-8e1f-c905a6308909_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VSQD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8353430d-e617-48cd-8e1f-c905a6308909_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!VSQD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8353430d-e617-48cd-8e1f-c905a6308909_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!VSQD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8353430d-e617-48cd-8e1f-c905a6308909_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!VSQD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8353430d-e617-48cd-8e1f-c905a6308909_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VSQD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8353430d-e617-48cd-8e1f-c905a6308909_1536x1024.png"
width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8353430d-e617-48cd-8e1f-c905a6308909_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2547663,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://jenniferlensborn.substack.com/i/188717219?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8353430d-e617-48cd-8e1f-c905a6308909_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!VSQD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8353430d-e617-48cd-8e1f-c905a6308909_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!VSQD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8353430d-e617-48cd-8e1f-c905a6308909_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!VSQD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8353430d-e617-48cd-8e1f-c905a6308909_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!VSQD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8353430d-e617-48cd-8e1f-c905a6308909_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h3>What the Em Dash Used to Do</h3><p>My curiosity about the em dash&#8217;s frequent appearance led me back to older books on my shelf. In my copy of <em>Treasure Island</em>, the em dash appears frequently in dialogue:</p><blockquote><p>&#8220;Well, many&#8217;s the long night I&#8217;ve dreamed of cheese&#8212;toasted, mostly&#8212;and woke up again, and here I were.&#8221;</p></blockquote><p>The em dash in this case lets the sentence move the way the character is talking, indicating his interrupted speech and an abrupt change of sentence. It keeps the rhythm uneven and slightly messy, which makes it feel human.
It is there to preserve voice, not to make the sentence look refined.</p><p>I also noticed that in these older editions the em dash sits tightly between words, without spaces. In newer writing, there is often space around it. The rules seem to shift depending on where and how something is published. Maybe punctuation just adapts to technology and taste.</p><p>That evolution does not concern me.</p><div><hr></div><h3>When Style Becomes a Tell</h3><p>My experience with language models is that they tend to produce polished, structured writing that values clarity and smooth flow. They connect ideas, ease transitions, and often expand thoughts rather than shorten them. Over time, this creates a recognizable rhythm.</p><p>The em dash fits naturally into that rhythm because it lets a sentence shift direction without fully rewriting itself. It also adds emphasis without forcing the writer to be brief, and it gives the impression of reflection. Used occasionally, it feels thoughtful; used repeatedly across millions of posts, it begins to feel familiar in a different way.</p><p>When large numbers of AI-assisted texts rely on similar rhythms and punctuation patterns, those patterns stop feeling like personal choices and start feeling like signatures. And once readers begin to associate those signatures with automation, credibility can quietly shift.</p><p>I wonder what kind of culture shift this will bring&#8230;</p><div><hr></div><h3>Reputational Drift</h3><p>What concerns me is not misuse of the em dash, but what happens when repetition changes meaning.</p><p>If readers start to associate the em dash with AI-generated text, writers may begin avoiding it. We have seen similar shifts before. For example, visual styles lose their uniqueness when they are overused, stock imagery becomes easy to spot, and certain corporate phrases lose their impact once they are repeated too often.
The meaning changes through overexposure.</p><p>The deeper risk is not that AI writes incorrectly, it is that it writes in ways that start to look the same. When repetition reduces contrast, readers adjust. And sometimes that adjustment means stepping away from punctuation marks or words that were once simply part of normal expression.</p><div><hr></div><h3>Human in the Loop as Cultural Steward</h3><p>Most conversations I read about human oversight in AI focus on accuracy, bias, and safety. Those concerns are important, but if style can quietly change meaning through repetition, then oversight must include something more subtle: paying attention to how language shifts when patterns are repeated at scale.</p><p>AI systems do not just help us write, they repeat what they have learned. If a certain rhythm feels polished, it spreads. If a certain punctuation habit becomes common, it begins to look normal. Over time, repetition shapes expectation, and expectation changes how we interpret what we read.</p><p>This is where the human role matters. Not only to correct mistakes, but to notice sameness. And not only to approve content, but to decide when a sentence should be shorter, when a dash should become a period, or when a thought should stand on its own. Human involvement does not have to be complicated; it can be as simple as reading a draft once more and asking whether the rhythm feels chosen or automatic.</p><p>A small pause, a small edit, can slow that drift. These are small decisions, but at scale they protect variation instead of reinforcing sameness.</p><div><hr></div><h3>A Broader Pattern</h3><p>The em dash is easy to notice, which makes it a useful example. It gives us something concrete to point to, but the larger issue is not about one punctuation mark. It is about what happens when small writing habits are repeated so often that they begin to feel generated.</p><p>When AI systems generate text, they usually repeat patterns they have learned.
If a certain rhythm or stylistic choice appears often enough, it can start to look like the default. Over time, readers will most likely connect those patterns with automation, even if the pattern itself was once a normal and expressive part of writing. I can almost hear it now: &#8220;Oh this reads like it was AI generated with all of these em dashes, maybe I shouldn&#8217;t use them in my post.&#8221; </p><p>And it&#8217;s not just em dashes. I have also read plenty of articles where people list words you shouldn&#8217;t use in your prompts because they are &#8220;AI words&#8221;. <br>Why?&#8212;these words are often used in AI outputs. &#8220;Sigh&#8221;</p><p>If repetition can slowly change how we interpret style, then it is worth asking how much of that repetition we want to let pass without reflection. The em dash may simply be visible right now because it is easy to spot. But it makes me wonder what other small habits, including certain words, are quietly becoming signals, and whether we will notice before they begin to shape how we write. 
</p><p>If a punctuation mark or even an ordinary word becomes strongly linked to automation, I can&#8217;t help but wonder how that might shape writing culture in the long run.</p>]]></content:encoded></item><item><title><![CDATA[Running AI at Home, and What Changed When I Took Privacy Seriously]]></title><description><![CDATA[A follow-up to &#8220;How I Work with AI &#8211; and Why Not All Conversations Belong in the Same Place&#8221;]]></description><link>https://jenniferlensborn.substack.com/p/running-ai-at-home-and-what-changed</link><guid isPermaLink="false">https://jenniferlensborn.substack.com/p/running-ai-at-home-and-what-changed</guid><dc:creator><![CDATA[Jennifer Lensborn]]></dc:creator><pubDate>Sun, 08 Feb 2026 17:16:27 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9b3581cd-c024-499a-9bf5-c99e62a74e1d_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In a previous post, <em><a href="https://open.substack.com/pub/jenniferlensborn/p/how-i-work-with-ai-and-why-not-all?utm_campaign=post-expanded-share&amp;utm_medium=web">How I Work with AI: Why Not All Conversations Belong in the Same Place</a></em>,   I wrote about how I&#8217;ve ended up using three different AI environments over time.</p><ul><li><p>ChatGPT for thinking and exploration.</p></li><li><p>AWS Bedrock with AnythingLLM for professional work.</p></li><li><p>And, eventually, local models for things that should never leave the house.</p></li></ul><p>I ended that post by saying I would come back once I had actually tested that last category.</p><p>This is that follow-up. 
Over the course of about a week, roughly eight hours spread across evenings, I tried to answer a very specific question:</p><p><em>Is it realistically possible to run a fully local AI setup for deeply personal content and actually trust it?</em></p><p>Not as a demo, not as a research project, but as something I could genuinely use.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7n3x!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe501c896-991f-4cb3-a3d1-b2b145ace155_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7n3x!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe501c896-991f-4cb3-a3d1-b2b145ace155_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!7n3x!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe501c896-991f-4cb3-a3d1-b2b145ace155_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!7n3x!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe501c896-991f-4cb3-a3d1-b2b145ace155_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!7n3x!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe501c896-991f-4cb3-a3d1-b2b145ace155_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7n3x!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe501c896-991f-4cb3-a3d1-b2b145ace155_1536x1024.png" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e501c896-991f-4cb3-a3d1-b2b145ace155_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3247332,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://jenniferlensborn.substack.com/i/187295501?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe501c896-991f-4cb3-a3d1-b2b145ace155_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7n3x!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe501c896-991f-4cb3-a3d1-b2b145ace155_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!7n3x!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe501c896-991f-4cb3-a3d1-b2b145ace155_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!7n3x!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe501c896-991f-4cb3-a3d1-b2b145ace155_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!7n3x!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe501c896-991f-4cb3-a3d1-b2b145ace155_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h3><strong>A Quick Orientation Before I Go Further</strong></h3><p>Before getting into what worked and what didn&#8217;t, it helps to be clear what I started with, what actually changed over the week, and under what circumstances.</p><p>I started with a fairly straightforward setup. My MacBook Pro M2 with 16 GB of memory, Ollama to pull local models and AnythingLLM as the interface. 
I had a single-pass approach where I tried to summarize everything at once, with early experiments using llama3.1:8b and qwen2.5:7b.</p><p>At this stage, my focus was on improving outcomes by adjusting prompts or swapping models, rather than questioning whether the overall structure made sense.</p><p>What I ended up with looked very different.</p><ul><li><p>Ollama as the local model runtime.</p></li><li><p>A small Python script to orchestrate the work.</p></li><li><p>Chunked processing, retries, and multi-stage summaries.</p></li><li><p>Different models assigned to different roles.</p></li></ul><p>As the structure improved, the model choices shifted with it. I moved from llama3.1:8b to mistral:7b-instruct for chunking and structured summaries, and from qwen2.5:7b to qwen2.5:14b for merging and higher-level synthesis.</p><p>Running all of this on my MacBook was the constraint. It&#8217;s why I stayed within these model ranges and focused on structure, traceability, and reliability rather than chasing larger models.</p><p>Looking back, that shift from asking the model to do everything to designing a process it could reliably execute is what set the rest of the week on the right track.</p><div><hr></div><h3><strong>Why I Wanted to Do This at All</strong></h3><p>This wasn&#8217;t driven by general AI curiosity. I already use AI every day.</p><p>What pushed me here was the nature of the data. I wanted help working with health and medical logs, school and family notes, and personal journals. Not to generate new content, but to summarize long, messy entries, surface patterns over time, and help me reason about what was already written.</p><p>Privacy wasn&#8217;t theoretical in this case. It was non-negotiable.</p><p>I wasn&#8217;t comfortable putting this kind of material into any cloud system, regardless of terms, promises, or safeguards. 
For this category of information, the only acceptable answer was that it stays on my machine.</p><p>At the same time, I didn&#8217;t want a fragile experiment that only worked on a good day. I wanted something I felt comfortable with.</p><div><hr></div><h3><strong>ChatGPT Was Part of the Process</strong></h3><p>I didn&#8217;t figure this out in isolation.</p><p>From the beginning, I used ChatGPT as a thinking partner while working through the setup. Not to ask for a perfect solution, but to talk through what I was trying to do, what wasn&#8217;t working, and what felt wrong about the outputs I was getting.</p><p>There were many back-and-forth conversations. I would try something, see it fail in subtle ways, describe the failure, adjust the approach, and try again.</p><p>In that sense, the process mirrored how I already use ChatGPT. It wasn&#8217;t there to replace my judgment. It helped me think through trade-offs, structure problems, and weigh constraints until I saw the light at the end of the tunnel.</p><p>That matters, because this isn&#8217;t a story about abandoning cloud AI. It&#8217;s a story about using cloud AI deliberately to build something I could then run privately.</p><div><hr></div><h3><strong>Where It Got Frustrating</strong></h3><p>My first thought was to just dump everything in, ask for a summary, and wait.</p><p>Well&#8230; that didn&#8217;t work. Silly me.</p><p>Sometimes the output looked fine at first glance, but important early data was missing. Date ranges were wrong, and summaries were far shorter than they should have been. What made this uncomfortable wasn&#8217;t the failure itself, but how confident the failures looked.</p><p>There were moments where I genuinely wondered if this was worth the effort. It felt tedious to keep rerunning things and not get the result I wanted.</p><p>Around the same time, I realized that while AnythingLLM worked well for other contexts I use it in, it became a source of friction once I needed tighter control.
I needed chunking, retries, validation, and multi-stage processing. At that point, the interface slowed me down rather than helping.</p><p>Eventually, one thing became clear. This wasn&#8217;t a prompting problem, and it wasn&#8217;t a model problem. It was a <strong>structure</strong> problem.</p><div><hr></div><h3><strong>The Shift That Changed Everything</strong></h3><p>The breakthrough came when I stopped treating the data like a blob and started treating it like a book.</p><p>Instead of asking one model to understand everything at once, I split the work into stages. Small chunks processed consistently, summaries built on top of those, and higher-level synthesis layered above that.</p><p>This wasn&#8217;t solved with better prompts alone. I ended up using a small Python script to run the process. It handled chunking the data, running structured summaries, retrying when something failed, and then merging the results. Once that structure was in place, prompts still mattered, but they stopped being the primary bottleneck. Structure determined what was even possible; prompts fine-tuned the result.</p><p>I also stopped trying to make one model do everything. Some models were better at strict structure. Others were better at synthesis and pattern recognition.</p><p>Once each model had a clearly defined role, the outputs changed completely. They became complete, traceable, and trustworthy.</p><p>At every stage, I could move backwards from a high-level overview to a yearly summary, to a chunk, to the original entry. That mattered more to me than speed.</p><div><hr></div><h3><strong>When It Stopped Feeling Like an Experiment</strong></h3><p>Somewhere around the middle of the week, after chunking was in place and I allowed myself to use a slightly larger model for synthesis, I ran the pipeline again and immediately knew.</p><p><strong>This was it!</strong></p><p>The output matched what I had hoped for from the beginning. </p><ul><li><p>Nothing important missing. 
</p></li><li><p>No invented certainty. </p></li><li><p>No over-compression.</p></li></ul><p>From that point on, the work shifted. I wasn&#8217;t wondering whether this was viable anymore, I was just fine-tuning details.</p><p>This wasn&#8217;t instant either. A full run took a couple of hours on my laptop, not seconds, but it ran unattended and produced something I could actually trust.</p><p>That was the moment it stopped feeling like an experiment and started feeling like a tool.</p><div><hr></div><h3><strong>Locking It Down Without Making It Dramatic</strong></h3><p>All of this ran locally using Ollama as the model runtime.</p><p>Once the models were downloaded, I removed any ambiguity about where data could go. I blocked Ollama and AnythingLLM from all network access using the macOS firewall. I even blocked Terminal while running the Python script that ran the pipeline. That was probably overkill. What I wanted was:</p><ul><li><p>All files staying local.</p></li><li><p>No external APIs.</p></li><li><p>No background connections.</p></li><li><p>No searching for content on the internet.</p></li></ul><p>At no point did this data touch the cloud, not implicitly and not accidentally.</p><p>This wasn&#8217;t about paranoia. It was about being able to say, with confidence, that I knew where the walls were.</p><div><hr></div><h3><strong>What I Learned Along the Way</strong></h3><p>A few things became very clear for me.</p><ul><li><p>Bigger models reduce some failure modes, but they don&#8217;t compensate for missing structure.</p></li><li><p>Context limits fail quietly and confidently, often producing outputs that look reasonable while being incomplete or wrong.</p></li><li><p>Structure sets the ceiling for what prompting can achieve. Prompts still matter, but they can&#8217;t rescue an unstructured process.</p></li></ul><p>Trust did not come from how good the summaries looked. 
It came from being able to trace every conclusion back through layers to the original entries. Without that traceability, I wouldn&#8217;t have used the output at all.</p><p>One small but important lesson was about ambiguity. In early runs, the model tried to be helpful by expanding initials or resolving unclear references. This wasn&#8217;t acceptable.</p><p>I added explicit guardrails. Do not guess, do not expand, do not resolve ambiguity unless it is explicitly stated.</p><p>With this kind of data, preserving uncertainty is often the most honest outcome.</p><div><hr></div><h3><strong>Who This Is and Is Not For</strong></h3><p>This setup makes sense if you deal with sensitive personal or family data, want long-term summaries you can actually trust, and are comfortable reading scripts and thinking in systems.</p><p>It&#8217;s probably not worth it if you just want quick brainstorming, don&#8217;t care where data lives, or want something that works instantly with no setup.</p><p>This feels much closer to building a small personal tool than installing an app.</p><div><hr></div><h3><strong>Where I Have Landed</strong></h3><p>After a week of testing, I trust this approach for personal and family logs, the kind of information I would never want leaked.</p><p>I don&#8217;t think everyone needs to run local AI and I don&#8217;t think the cloud is inherently wrong.</p><p>I am more convinced than ever of the idea I ended my previous post with.</p><ul><li><p>Not all AI conversations deserve the same environment.</p></li><li><p>Some are low-risk.</p></li><li><p>Some are professional.</p></li><li><p>And some are personal enough to require walls, not windows.</p></li></ul><p>As AI becomes more embedded in daily life, as planners, journals, and confidants, privacy literacy matters more than ever.</p><p>Not fear, not paranoia, just thoughtful judgment about where information lives and who ultimately owns it.</p>]]></content:encoded></item><item><title><![CDATA[Agentic AI: What 
I’ve Learned and What Decision-Makers Should Prepare Before Talking to a Partner]]></title><description><![CDATA[Over the last months, I&#8217;ve spent significant time learning about agentic AI, not just as a technology, but as an enterprise capability.]]></description><link>https://jenniferlensborn.substack.com/p/agentic-ai-what-ive-learned-and-what</link><guid isPermaLink="false">https://jenniferlensborn.substack.com/p/agentic-ai-what-ive-learned-and-what</guid><dc:creator><![CDATA[Jennifer Lensborn]]></dc:creator><pubDate>Tue, 13 Jan 2026 13:56:16 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/39f8695a-9d36-41e2-b90c-d12bdd8e1064_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Over the last months, I&#8217;ve spent significant time learning about agentic AI, not just as a technology, but as an enterprise capability. What stood out most is that the biggest challenges rarely sit in models or tooling. They appear much earlier, before any architecture diagrams or proof-of-concepts exist.</p><p>Agentic AI represents a shift from isolated AI tools to interconnected, autonomous systems that reason, plan, act, and adapt over time. That shift changes not only what organizations build, but how decisions are governed, owned, and operationalized.</p><p><em>A note on perspective:</em><br>The platforms and examples referenced in this post are drawn from my own experience and recent training in an AWS context. This is not an endorsement of any single vendor, nor a claim that these approaches are exclusive to AWS. 
The principles discussed, including autonomy boundaries, governance, observability, partner readiness, and organizational preparation, apply regardless of cloud provider or technology stack.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fn0H!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1dc677ce-fd60-4e3e-a845-b586ea8adb7b_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fn0H!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1dc677ce-fd60-4e3e-a845-b586ea8adb7b_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!fn0H!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1dc677ce-fd60-4e3e-a845-b586ea8adb7b_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!fn0H!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1dc677ce-fd60-4e3e-a845-b586ea8adb7b_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!fn0H!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1dc677ce-fd60-4e3e-a845-b586ea8adb7b_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fn0H!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1dc677ce-fd60-4e3e-a845-b586ea8adb7b_1536x1024.png" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1dc677ce-fd60-4e3e-a845-b586ea8adb7b_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2135589,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://jenniferlensborn.substack.com/i/184433167?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1dc677ce-fd60-4e3e-a845-b586ea8adb7b_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!fn0H!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1dc677ce-fd60-4e3e-a845-b586ea8adb7b_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!fn0H!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1dc677ce-fd60-4e3e-a845-b586ea8adb7b_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!fn0H!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1dc677ce-fd60-4e3e-a845-b586ea8adb7b_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!fn0H!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1dc677ce-fd60-4e3e-a845-b586ea8adb7b_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><div><hr></div><h2>From Generative AI to Agentic AI: Why This Matters</h2><p>Generative AI assists humans with tasks.<br>Agentic AI takes action.</p><p>Agentic systems:</p><ul><li><p>Maintain context over time</p></li><li><p>Coordinate multiple agents</p></li><li><p>Execute multi-step workflows</p></li><li><p>Adapt based on outcomes</p></li><li><p>Operate with increasing autonomy</p></li></ul><p>This unlocks significant value, but it also introduces new risk. Most failures do not come from hallucinations or model limitations.
They come from unclear ownership, undefined autonomy, and missing operational guardrails.</p><div><hr></div><h2>The Question Decision-Makers Often Skip</h2><p>Before asking &#8220;Which partner should we work with?&#8221;, a more important question is:</p><p><strong>What are we actually ready for as an organization?</strong></p><p>Agentic AI does not create problems. It exposes existing ones:</p><ul><li><p>Data quality issues</p></li><li><p>Process gaps</p></li><li><p>Governance ambiguity</p></li><li><p>Risk tolerance mismatches</p></li></ul><p>The earlier these are surfaced, the more successful any implementation will be.</p><div><hr></div><h2>What Decision-Makers Should Clarify Internally First</h2><h3>1. Strategic Intent</h3><ul><li><p>What business outcome are we trying to improve?</p></li><li><p>Are we optimizing an existing process or redesigning it?</p></li><li><p>Is this exploratory, or are we expecting production impact?</p></li></ul><p>Agentic AI delivers the most value when goals are explicit.</p><div><hr></div><h3>2. Autonomy and Risk Boundaries</h3><ul><li><p>Which decisions may an agent make autonomously?</p></li><li><p>Which decisions must always involve human approval?</p></li><li><p>What is our tolerance for incorrect actions?</p></li><li><p>Who owns accountability when an agent acts?</p></li></ul><p>Autonomy without boundaries creates operational risk.</p><div><hr></div><h3>3. Data Readiness</h3><ul><li><p>Which systems will agents access?</p></li><li><p>Do we trust the quality of that data?</p></li><li><p>What data must never be accessed autonomously?</p></li></ul><p>Agentic AI amplifies both good and bad data.</p><div><hr></div><h3>4. 
Operating Model</h3><ul><li><p>Who monitors agents after deployment?</p></li><li><p>How are incidents handled?</p></li><li><p>How do we roll back incorrect actions?</p></li><li><p>How do we increase autonomy safely over time?</p></li></ul><p>Agentic AI requires ongoing operations, not just deployment.</p><div><hr></div><h2>What to Prepare Before Engaging a Partner</h2><p>Organizations move faster when they prepare a small but focused set of inputs:</p><ul><li><p>A concise problem statement</p></li><li><p>Defined autonomy limits</p></li><li><p>A high-level data inventory</p></li><li><p>Timeline expectations</p></li><li><p>Clear success criteria</p></li></ul><p>This preparation reduces friction and avoids misaligned expectations.</p><div><hr></div><h2>Questions Every Decision-Maker Should Ask a Partner</h2><h3>Production Readiness</h3><ul><li><p>How do you take agents from prototype to production?</p></li><li><p>How do you handle long-running workflows?</p></li><li><p>How do you manage agent state and memory?</p></li></ul><h3>Safety and Governance</h3><ul><li><p>How is human-in-the-loop implemented?</p></li><li><p>How do you prevent uncontrolled agent behavior?</p></li><li><p>How are confidence thresholds and rollback handled?</p></li></ul><h3>Observability</h3><ul><li><p>Can we trace why an agent made a decision?</p></li><li><p>Are agent actions logged and auditable?</p></li><li><p>How is agent performance measured over time?</p></li></ul><p>Clear answers here matter more than feature lists.</p><div><hr></div><h2>Choosing an Adoption Path</h2><p>Based on common enterprise patterns, organizations typically choose one of three paths.</p><h3>Ready-to-Use Solutions</h3><ul><li><p>Fast deployment</p></li><li><p>Lower risk</p></li><li><p>Limited customization</p></li></ul><h3>Custom-Built Agents</h3><ul><li><p>Full control</p></li><li><p>Higher investment</p></li><li><p>Greater long-term differentiation</p></li></ul><h3>Partner-Led Implementations</h3><ul><li><p>Balance 
between speed and customization</p></li><li><p>Industry-specific expertise</p></li><li><p>Reduced delivery risk</p></li></ul><p>The right choice depends on risk tolerance, internal capability, and desired differentiation.</p><div><hr></div><h2>Final Thought</h2><p>Agentic AI introduces new actors into enterprise systems. These are systems that make decisions, take actions, and learn over time.</p><p>Organizations that succeed:</p><ul><li><p>Think before they build</p></li><li><p>Define boundaries before autonomy</p></li><li><p>Treat agents as operational participants, not tools</p></li></ul><p>The most important work happens before the first partner meeting.</p>]]></content:encoded></item><item><title><![CDATA[The Joy of Learning Something New: My Experience With AWS Skill Builder]]></title><description><![CDATA[How a year of using AWS Skill Builder helped me expand my cloud knowledge, stay curious, and enjoy learning.]]></description><link>https://jenniferlensborn.substack.com/p/the-joy-of-learning-something-new</link><guid isPermaLink="false">https://jenniferlensborn.substack.com/p/the-joy-of-learning-something-new</guid><dc:creator><![CDATA[Jennifer Lensborn]]></dc:creator><pubDate>Tue, 02 Dec 2025 14:51:20 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/05da399d-b8ae-4272-8452-1119e39cf2fa_1463x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<p><strong>I have worked with AWS services for almost six years</strong>, and for a little over a year now, I have been using AWS Skill Builder as part of my ongoing learning. Like many people in tech, I knew I should spend more time exploring newer capabilities and sharpening my knowledge. Yet even with access to the platform, I did not dive into it right away.</p><p>When I finally committed to using it more intentionally, I expected just another set of videos to watch.
What I found instead was an engaging, role-based learning experience that made me wonder:</p><p><strong>&#8220;Why did I wait so long to do this?&#8221;</strong></p><div><hr></div><h3><strong>What Hooked Me: Role-Based Learning and Hands-On Labs</strong></h3><p>AWS Skill Builder&#8217;s structure stands out. You choose a learning path based on your goals or job responsibilities, and the platform guides you through content that builds in a logical order. Instead of being overwhelmed by hundreds of AWS services, you get a clear route that focuses your attention.</p><p>Along with the courses, the platform offers hands-on labs that place you directly inside AWS environments. This is where Skill Builder becomes valuable. You are not just remembering information. You are deploying services, modifying configurations, troubleshooting, and learning from real outcomes without risking a production environment.</p><p>Each lab follows a three-stage progression that helps the concepts stick:</p><p><strong>Learn</strong><br>Clear visual diagrams outline what you are about to build and how AWS services fit together.</p><p><strong>Practice</strong><br>You step into the AWS Console and walk through the implementation.</p><p><strong>DIY</strong><br>A slight twist forces you to apply what you have learned without step-by-step guidance.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Z9Kv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F042fb10e-4b74-4e99-ba59-482f74cb177d_2156x1201.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Z9Kv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F042fb10e-4b74-4e99-ba59-482f74cb177d_2156x1201.png 424w,
https://substackcdn.com/image/fetch/$s_!Z9Kv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F042fb10e-4b74-4e99-ba59-482f74cb177d_2156x1201.png 848w, https://substackcdn.com/image/fetch/$s_!Z9Kv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F042fb10e-4b74-4e99-ba59-482f74cb177d_2156x1201.png 1272w, https://substackcdn.com/image/fetch/$s_!Z9Kv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F042fb10e-4b74-4e99-ba59-482f74cb177d_2156x1201.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Z9Kv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F042fb10e-4b74-4e99-ba59-482f74cb177d_2156x1201.png" width="640" height="356.4835164835165" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/042fb10e-4b74-4e99-ba59-482f74cb177d_2156x1201.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:811,&quot;width&quot;:1456,&quot;resizeWidth&quot;:640,&quot;bytes&quot;:1150662,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://jenniferlensborn.substack.com/i/180500020?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F042fb10e-4b74-4e99-ba59-482f74cb177d_2156x1201.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!Z9Kv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F042fb10e-4b74-4e99-ba59-482f74cb177d_2156x1201.png 424w, https://substackcdn.com/image/fetch/$s_!Z9Kv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F042fb10e-4b74-4e99-ba59-482f74cb177d_2156x1201.png 848w, https://substackcdn.com/image/fetch/$s_!Z9Kv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F042fb10e-4b74-4e99-ba59-482f74cb177d_2156x1201.png 1272w, https://substackcdn.com/image/fetch/$s_!Z9Kv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F042fb10e-4b74-4e99-ba59-482f74cb177d_2156x1201.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Where Skill Builder sets the stage before you dive into the hands-on work.</figcaption></figure></div><p>This structure moves you from understanding to action. It builds real confidence because you are not only shown what to do. You prove that you can do it.</p><div><hr></div><h3><strong>Reinforcing What I Know and Exploring What I Do Not</strong></h3><p>I began with the <strong>Cloud Quest Practitioner</strong> path because it aligned with my existing experience. It was familiar enough to be comfortable, yet detailed enough to be interesting. It refreshed concepts I already used while introducing areas I had not explored in depth.</p><p>I am now working through the <strong>Networking</strong> path. Networking is something I have used for years, yet I continue to find new perspectives and clarifications that improve my understanding. It is a good feeling when something you thought you were familiar with becomes clearer and more meaningful.</p><p>There is something satisfying about realizing that there is always more to learn, even in topics you work with regularly.</p><div><hr></div><h3><strong>When Game Meets Cloud: Pets, Drones, and Motivation</strong></h3><p>One of the biggest surprises is the addition of playful game elements in Cloud Quest, the interactive portion of Skill Builder. While completing quests, I encountered features that reward progress in unexpected ways. I have collected virtual pets by answering quizzes correctly and zapped drones that trigger short knowledge checks.
Successful answers earn AWS-themed cards that help with challenges.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jGrt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896511d9-b318-4ce2-9c95-d9a49d6303f3_2148x1205.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jGrt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896511d9-b318-4ce2-9c95-d9a49d6303f3_2148x1205.png 424w, https://substackcdn.com/image/fetch/$s_!jGrt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896511d9-b318-4ce2-9c95-d9a49d6303f3_2148x1205.png 848w, https://substackcdn.com/image/fetch/$s_!jGrt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896511d9-b318-4ce2-9c95-d9a49d6303f3_2148x1205.png 1272w, https://substackcdn.com/image/fetch/$s_!jGrt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896511d9-b318-4ce2-9c95-d9a49d6303f3_2148x1205.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jGrt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896511d9-b318-4ce2-9c95-d9a49d6303f3_2148x1205.png" width="644" height="361.36538461538464"
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/896511d9-b318-4ce2-9c95-d9a49d6303f3_2148x1205.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:817,&quot;width&quot;:1456,&quot;resizeWidth&quot;:644,&quot;bytes&quot;:442919,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://jenniferlensborn.substack.com/i/180500020?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896511d9-b318-4ce2-9c95-d9a49d6303f3_2148x1205.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!jGrt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896511d9-b318-4ce2-9c95-d9a49d6303f3_2148x1205.png 424w, https://substackcdn.com/image/fetch/$s_!jGrt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896511d9-b318-4ce2-9c95-d9a49d6303f3_2148x1205.png 848w, https://substackcdn.com/image/fetch/$s_!jGrt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896511d9-b318-4ce2-9c95-d9a49d6303f3_2148x1205.png 1272w, https://substackcdn.com/image/fetch/$s_!jGrt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F896511d9-b318-4ce2-9c95-d9a49d6303f3_2148x1205.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">My in-game Cloud Quest character wandering the learning city in search of the next quest.</figcaption></figure></div><p>These game elements never block your progress, and you can choose to ignore them. For me, they added moments of enjoyment that broke up the learning flow and made the experience memorable in a way that a traditional training portal usually does not.</p><div><hr></div><h3><strong>AWS Keeps the Content Current</strong></h3><p>Early in my journey, I ran into a lab that did not match the current AWS Console. I could not complete it, and I assumed it would take weeks to correct.</p><p>I stepped away from it and did not check back until three days later. When I returned, the lab was already updated.
It may have been fixed sooner, but the important part was that the issue did not linger.</p><p>This responsiveness matters. It shows that AWS pays attention to Skill Builder and keeps it aligned with the platform it teaches. When a learning tool stays current, the time you invest in it feels worthwhile.</p><div><hr></div><h3><strong>A Balanced View</strong></h3><p>Skill Builder is not perfect. The graphics in Cloud Quest feel dated, and the character movement on the map can be clunky at times. It has the structure of a quest-style environment, but the visual quality and navigation do not quite match what many people might expect from modern quest-based experiences. These elements do not ruin the learning, but they can occasionally distract from it.</p><p>The content, however, is strong. The labs, diagrams, and real console access build confidence that lasts beyond the lesson. That is where the platform proves its value.</p><div><hr></div><h3><strong>The Biggest Benefit I Have Gained</strong></h3><p><strong>Skill Builder has given me hands-on experience with topics I had not explored deeply before, and it is refreshing to learn something new.</strong></p><p>I have connected services more clearly, reduced confusion in areas I once found complex, and strengthened my ability to work confidently with AWS.</p><p>That feels like meaningful progress.</p><div><hr></div><h3><strong>Would I Recommend AWS Skill Builder?</strong></h3><p>Yes. Without hesitation. If you want to explore it yourself, you can find AWS Skill Builder here: https://skillbuilder.aws/</p><p>Whether you are new to AWS, preparing for a certification, or already using AWS services and want to expand your skill set, Skill Builder provides practical and focused paths that help you grow.</p><p>If your role involves AWS today, Skill Builder is also a smart way to explore services you do not currently use.
Many people get comfortable with what they already know and miss opportunities to broaden their capabilities. Skill Builder encourages you to explore more of the platform in a guided way.</p><p>It is worth discussing with your manager as part of your professional development. Organizations benefit when employees understand the tools they work with. Skill Builder is a relatively small investment that can unlock a great deal of value for both you and your team.</p><p>You may not fall in love with every part of the interface, but you will appreciate what you walk away with.</p><div><hr></div><h3><strong>Why Continual Learning Matters</strong></h3><p>The most important lesson I took away from this experience is not about AWS itself. It is about the process of learning.</p><p>Skills do not improve on their own. You have to nurture them. When you explore something that interests you, or step slightly beyond your comfort zone, learning becomes energizing rather than draining.</p><p>Curiosity turns effort into progress.</p><p>Whether it is AWS Skill Builder or something else entirely, the idea is simple:</p><p><strong>Find something that engages you. Lean into it. 
Keep learning.</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!i8S0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cee811f-69e8-4787-b1f0-43d75322e463_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!i8S0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cee811f-69e8-4787-b1f0-43d75322e463_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!i8S0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cee811f-69e8-4787-b1f0-43d75322e463_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!i8S0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cee811f-69e8-4787-b1f0-43d75322e463_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!i8S0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cee811f-69e8-4787-b1f0-43d75322e463_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!i8S0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cee811f-69e8-4787-b1f0-43d75322e463_1536x1024.png" width="508" height="338.782967032967" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0cee811f-69e8-4787-b1f0-43d75322e463_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:508,&quot;bytes&quot;:2271757,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://jenniferlensborn.substack.com/i/180500020?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cee811f-69e8-4787-b1f0-43d75322e463_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!i8S0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cee811f-69e8-4787-b1f0-43d75322e463_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!i8S0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cee811f-69e8-4787-b1f0-43d75322e463_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!i8S0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cee811f-69e8-4787-b1f0-43d75322e463_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!i8S0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cee811f-69e8-4787-b1f0-43d75322e463_1536x1024.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><strong>Lean into it.
Keep learning.</strong></figcaption></figure></div>]]></content:encoded></item><item><title><![CDATA[How I Work with AI: Why Not All Conversations Belong in the Same Place]]></title><description><![CDATA[My journey from &#8220;just using AI&#8221; to understanding how and where it fits for me.]]></description><link>https://jenniferlensborn.substack.com/p/how-i-work-with-ai-and-why-not-all</link><guid isPermaLink="false">https://jenniferlensborn.substack.com/p/how-i-work-with-ai-and-why-not-all</guid><dc:creator><![CDATA[Jennifer Lensborn]]></dc:creator><pubDate>Mon, 24 Nov 2025 23:34:25 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/35bc7106-95f4-48ff-89b6-f66c5030cb45_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I spend my days leading and advising engineering teams, and my evenings unwinding: crocheting, playing video games, or diving into whatever new curiosity has grabbed my attention. For the last year or so, that curiosity has been AI,  not just at work, but across everything I do.</p><p>Over that time, after more than a year using ChatGPT, AWS Bedrock, and Anything LLM, I&#8217;ve noticed patterns in how I use each. These are <strong>my observations</strong>, not hard rules. 
This is how I see it.</p><blockquote><p>Even as AI evolves rapidly, how we use it has to evolve too.</p></blockquote><p>Because what started as a single tool in my workflow has gradually become <strong>three very different ones</strong>, each with its own purpose.</p><div><hr></div><h2>A Tool Is Only as Smart as Its Context</h2><p>There&#8217;s a saying I love:</p><blockquote><p><strong>A fool with a tool is still a fool.</strong></p></blockquote><p>The more I used different AI systems, the more I realized that <strong>context matters as much as capability</strong>.</p><p>The question stopped being:<br><strong>&#8220;What can the model do?&#8221;</strong><br>and became:<br><strong>&#8220;Where should this conversation happen?&#8221;</strong></p><div><hr></div><h2>1. ChatGPT: My Thinking Partner</h2><p>ChatGPT is where I go when I need to think out loud with something that can keep up. It&#8217;s fast, conversational, and surprisingly good at turning rough ideas into structured concepts.</p><ul><li><p>I&#8217;ve always kept <strong>security in mind</strong>: I don&#8217;t copy, paste, or share sensitive information.</p></li><li><p>For casual thinking, problem-solving, and general exploration, the cloud works fine.</p></li><li><p><strong>Privacy nuance:</strong> The difference between ChatGPT and AWS/Anything LLM is minimal for typical users. The real jump in privacy only happens when you go fully local.</p></li></ul><div><hr></div><h2>2. AWS + Anything LLM: The Professional Workhorse</h2><p>At work, I need slightly different rules: governance, audit trails, and data privacy. AWS Bedrock with Anything LLM gives me that.</p><ul><li><p>Summarize documents</p></li><li><p>Draft internal writing</p></li><li><p>Reason through workflows</p></li></ul><p>It&#8217;s not as frictionless as ChatGPT, but ideal for enterprise-level work. 
Most individuals won&#8217;t use advanced security features like IAM, VPC endpoints, or customer-managed keys, so for typical use AWS is mostly similar to ChatGPT in practical privacy.</p><div><hr></div><h2>3. Local Models: The Next Step</h2><p>Then there&#8217;s the third category: <strong>the things that should never leave the house</strong>.</p><ul><li><p>Journals</p></li><li><p>Family notes</p></li><li><p>Deeply personal content</p></li></ul><p>For that, the only real solution is a <strong>fully local model</strong>: self-hosted, no telemetry, no third-party logs. I haven&#8217;t deployed it yet, but I&#8217;m excited to get started.</p><p>It requires more work, hardware, and responsibility, but for content involving the people you care most about, it&#8217;s worth it.</p><div><hr></div><h2>The Big Lesson</h2><p>After over a year of using these platforms, I&#8217;ve come to a simple conclusion. <strong>This is just my view</strong>:</p><blockquote><p>Not all AI conversations deserve the same environment.</p></blockquote><ul><li><p>Some discussions are low-risk.</p></li><li><p>Some are professional.</p></li><li><p>Some are personal enough to require walls, not windows.</p></li></ul><p>As AI becomes part of daily life (planners, journals, confidants), we need a new skill: <strong>privacy literacy</strong>.</p><p>Not paranoia. Not fear.<br>Just thoughtful decision-making about <em>where</em> information lives and <em>who</em> ultimately owns it.</p><div><hr></div><h2>Where I&#8217;ve Landed</h2><ul><li><p><strong>ChatGPT</strong>: fast thinking, creative clarity</p></li><li><p><strong>AWS + Anything LLM</strong>: work, structured reasoning</p></li><li><p><strong>Local models</strong>: personal journaling and family data (coming soon)</p></li></ul><p>Three tools.<br>Three contexts.<br>A much healthier relationship between information and risk.</p><p>AI keeps evolving fast, and if we want to use it well, we can&#8217;t just learn new features.
We have to learn new <strong>judgment</strong>.</p><p>Because a fool with a tool is still a fool&#8230; and AI isn&#8217;t here to replace that wisdom. <br>It&#8217;s here to make it matter even more.</p>]]></content:encoded></item></channel></rss>