We've moved from relying solely on LLM-inferred values to a more robust approach in lead-value.ts. By analyzing both historical contact memory and defined business offers, the system now extracts currency mentions and matches them against known price points, producing a more accurate lead valuation.
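As an illustrative sketch (the function and type names below are hypothetical, not the actual lead-value.ts code), the matching step can be as simple as extracting dollar amounts from text and snapping them to a known offer price within a tolerance:

```typescript
type Offer = { name: string; price: number };

// Pull dollar amounts like "$1,200" or "$99.50" out of free text.
function extractCurrencyMentions(text: string): number[] {
  const re = /\$\s?(\d{1,3}(?:,\d{3})*(?:\.\d{1,2})?)/g;
  const out: number[] = [];
  for (const m of text.matchAll(re)) {
    out.push(Number(m[1].replace(/,/g, "")));
  }
  return out;
}

// Snap a mentioned amount to a known price point if it is within
// `tolerance` (relative); otherwise report no confident valuation.
function matchLeadValue(
  text: string,
  offers: Offer[],
  tolerance = 0.1
): number | null {
  for (const amount of extractCurrencyMentions(text)) {
    const hit = offers.find(
      (o) => Math.abs(o.price - amount) / o.price <= tolerance
    );
    if (hit) return hit.price;
  }
  return null;
}
```

Returning `null` when no mention matches a known offer keeps the valuation grounded in real price points instead of guesses.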

We updated the system prompts across all Sidekick modes—Lead Analysis, Follow-up, and Auto-reply—to enforce a more natural, professional, and conversational human tone. The prompts now explicitly forbid corporate jargon, fluff, em dashes, and common AI writing patterns like the 'no x, no y, but z' structure. Additionally, we simplified the Lead Analysis output by removing the speculative leadValue field to ensure cleaner JSON responses.
We've replaced static system prompt customizations with a dynamic Supermemory layer, providing Pilot's Sidekick with deeper access to business knowledge and contact history. By leveraging Inngest for background synchronization and backfilling, the Sidekick can now retrieve highly relevant context at inference time, significantly improving personalization. We also deprecated sidekick_setting.systemPrompt in favor of this more scalable approach. 
We have simplified our pricing structure by removing the Pro and Enterprise tiers, making the platform completely free and open-source for all users. The pricing page now reflects this shift to a $0/forever model, including self-hosting capabilities and a direct link to our repository. We also updated the Open Graph and Twitter meta tags to better highlight our focus on AI-powered data platform cost intelligence. 
We've upgraded our intelligence layer with expert agents specialized in platform cost optimization for services like Snowflake, AWS, Databricks, and more. The router now dynamically matches user queries to the relevant expert using curated knowledge bases, providing more accurate, model-specific insights via streaming. Alongside this, we've revamped the overview dashboard with MoM trend analysis, anomaly detection, and expanded views for AWS and Databricks. 
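The routing step can be sketched as a keyword match (the real router draws on curated knowledge bases; the names and scoring below are illustrative assumptions):

```typescript
type Expert = { name: string; keywords: string[] };

// Pick the expert whose keywords best overlap the user's query;
// return null when nothing matches so a generalist can handle it.
function routeQuery(query: string, experts: Expert[]): Expert | null {
  const q = query.toLowerCase();
  let best: Expert | null = null;
  let bestScore = 0;
  for (const e of experts) {
    const score = e.keywords.filter((k) => q.includes(k)).length;
    if (score > bestScore) {
      best = e;
      bestScore = score;
    }
  }
  return best;
}
```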
We've refactored the push notification server by removing topic-based subscriptions and the associated broadcast functionality. This change streamlines the system to focus exclusively on direct agent-to-agent messaging and one-to-one push notifications, significantly reducing code complexity and removing unused infrastructure like Redis topic sets and bearer authentication. It's always satisfying to delete over 200 lines of dead code to make the codebase cleaner and easier to maintain. 
We've replaced the single-topic /push/topic endpoint with /push/topics, which now supports broadcasting to multiple topics in a single request. This update uses a Redis SUNION operation to automatically deduplicate subscribers across the provided topics, ensuring that agents receive a notification exactly once even if they are subscribed to several of the target topics. This both reduces network overhead and improves the developer experience for multi-topic notifications.
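The deduplication semantics are simply those of a set union. A minimal sketch of the behavior (in production this is a single server-side call such as ioredis's `redis.sunion(...topicKeys)`, not application-side code):

```typescript
// Union subscriber IDs across topics, keeping each ID exactly once --
// the same result SUNION computes over Redis topic sets.
function unionSubscribers(topicSets: string[][]): string[] {
  const seen = new Set<string>();
  for (const set of topicSets) {
    for (const id of set) seen.add(id);
  }
  return [...seen];
}
```

Doing the union server-side means one round trip regardless of how many topics the request names.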
This release addresses a potential directory hashing collision by null-terminating path and hash entries in hashDirectoryRecursive. By explicitly separating these entries, we ensure that distinct directory structures produce unique hashes, improving the reliability of directory state tracking.
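Why the separator matters: without a delimiter, distinct lists of (path, hash) entries can concatenate into identical byte streams. A TypeScript sketch of the idea (`hashEntries` is illustrative, not the actual hashDirectoryRecursive):

```typescript
import { createHash } from "node:crypto";

// NUL-terminate each path and hash so entry boundaries are unambiguous.
// Without the "\0" separators, ["ab", "x"] and ["a", "bx"] would feed
// the exact same bytes ("abx") into the hash and collide.
function hashEntries(entries: Array<[path: string, hash: string]>): string {
  const h = createHash("sha256");
  for (const [path, entryHash] of entries) {
    h.update(path);
    h.update("\0");
    h.update(entryHash);
    h.update("\0");
  }
  return h.digest("hex");
}
```

NUL works as a separator because it cannot appear inside a file path, so no legitimate entry list can straddle a boundary.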
This release includes critical improvements to context handling, introducing signal-aware contexts and a new nil-context guard to improve application robustness. We've also addressed stability by implementing recovery mechanisms during partial application, ensuring a smoother experience under stress. These changes directly enhance reliability, making your service more resilient against unexpected termination signals. 
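An analogous nil-context guard, sketched in TypeScript with AbortSignal (the release itself concerns the service's own context handling; this only illustrates the pattern):

```typescript
// Guard against a missing context: fall back to a never-aborting
// signal instead of letting downstream code dereference nothing.
function ensureSignal(signal?: AbortSignal | null): AbortSignal {
  return signal ?? new AbortController().signal;
}
```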
This patch release updates go-m1cpu to v0.2.1, addressing a VLA (Variable Length Array) compiler warning. Keeping dependencies updated ensures better compatibility and build health across different platforms. 
This release introduces granular control over review exclusions, including both global persistent settings and local plans for your review workflow. We've also streamlined the TUI experience by defaulting the destination to the selected source and updating UI terminology to "whole-folder" for better clarity. These changes should make navigating and configuring your reviews feel significantly more intuitive. 
This release introduces safety gates for the undo feature, blocking rewinds if there are pending unapplied transactions to ensure state consistency. We've also enhanced the TUI experience with better relative path readability in the plan viewer and added a quick-access shortcut for the current directory in the picker. These refinements aim to make navigating and managing file operations more intuitive and reliable. 
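The gate itself reduces to a precondition check before any rewind; a hypothetical sketch (`State` and `canRewind` are illustrative names):

```typescript
type State = { pendingTransactions: number };

// Block the rewind whenever unapplied transactions exist, so undo
// can never leave the working tree and the transaction log disagreeing.
function canRewind(state: State): { ok: boolean; reason?: string } {
  if (state.pendingTransactions > 0) {
    return { ok: false, reason: "pending unapplied transactions" };
  }
  return { ok: true };
}
```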
The CSS for primary buttons was updated to give them a solid black border, improving visual consistency and contrast across the interface.
Updated src/style.css to unify border styles across the card and step components, aligning their border definitions for a more consistent UI layout.
To address limitations in agent-driven task execution, we've added a native terminal pane directly into the UI, powered by ttyd. Users can now access a shell via a new navigation tab and configure custom URL parameters, such as tokens or startup arguments, directly through the updated settings page. This makes it much easier to perform manual environment setup or maintenance tasks that the AI assistant cannot safely handle alone. 
We've added new agent tools to allow for context-aware meme generation directly within our pull request and commit summaries. By integrating with memegen.link and persisting images to Supabase storage, agents can now create and embed relevant memes to keep communication fun and engaging. Check out the new workflow in our documentation and expect more personality in future updates! 
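Building the image URL follows memegen.link's path format, in which spaces in caption text become underscores; a minimal sketch (the Supabase persistence step, e.g. `supabase.storage.from(bucket).upload(path, blob)`, is omitted here):

```typescript
// Construct a memegen.link image URL for a given template and captions.
// Only space escaping is handled; memegen defines further escapes for
// characters like "?" and "-" that a full implementation would cover.
function memeUrl(template: string, top: string, bottom: string): string {
  const esc = (s: string) => s.replace(/ /g, "_");
  return `https://api.memegen.link/images/${template}/${esc(top)}/${esc(bottom)}.png`;
}
```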
I've updated the FAQ section in index.html to provide clearer information for potential users, refining the language to better communicate Lattice's unique value proposition compared to other available tools.
We've restructured the project to use Vite as our build tool, replacing the previous configuration to improve dev server startup times and hot module replacement. This migration simplifies our build pipeline and provides a more modern development experience. Expect a significantly snappier feedback loop during local development. 
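For reference, a minimal vite.config.ts of the shape such a migration typically lands on (the specific options shown are assumptions, not our exact configuration):

```typescript
import { defineConfig } from "vite";

export default defineConfig({
  // Dev server settings: Vite's default port, shown explicitly.
  server: { port: 5173 },
  // Production build output and sourcemaps for debugging.
  build: { outDir: "dist", sourcemap: true },
});
```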
Implemented a fresh, minimal landing page for Lattice, the AI Chief of Staff. The page outlines the value proposition for coaches looking to automate operations, details how the service works, and provides a clear CTA for applicants to apply for access.
To reduce noisy notifications from minor feed updates, I've introduced a minLineChanges configuration option. The scheduler now filters feed diffs, only pushing updates to the pns.1lattice.co endpoint if the number of changed lines meets or exceeds the configured threshold (defaulting to 5). This allows for smarter notification management on a per-feed or global level.
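The threshold check can be sketched as follows (`countChangedLines` and `shouldNotify` are illustrative names, and counting unified-diff `+`/`-` lines is an assumption about the diff format):

```typescript
// Count added/removed lines in a unified diff, skipping the
// "+++" / "---" file header lines.
function countChangedLines(diff: string): number {
  return diff
    .split("\n")
    .filter((l) => /^[+-]/.test(l) && !/^(\+\+\+|---)/.test(l)).length;
}

// Only push a notification when the change size meets the
// configured minimum (defaulting to 5, matching minLineChanges).
function shouldNotify(diff: string, minLineChanges = 5): boolean {
  return countChangedLines(diff) >= minLineChanges;
}
```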