What I think we will see in the future is company-wide analysis of anonymised communications with agents, and derivations of common pain points and themes based on that.
I.e., the derivation of “knowledge units” will be passive. CTOs will have clear insight into how much time (well, tokens) is spent on various tasks and what the common pain points are, not because some agent decided that a particular roadblock was noteworthy enough, but because X agents faced it over the last Y months.
I trust that an LLM can fix a problem without the help of other agents that are barely different from it. What it lacks is the context to identify which problems are systemic and the means to fix systemic problems. For that you need aggregate data processing.
You analyze each conversation with an LLM: summarize it, add tags, identify problematic tools, etc. The metrics go to management, some docs are auto-generated and added to the company knowledge base like all other company docs.
It’s like what they do in support or sales: they have conversational data and they use it to improve processes. Now the same is possible for code, without any sort of proactive inquiry from chatbots.
Who is “you” in the first sentence? A human or an LLM? It seems to me that only the latter would be practical, given the volume. But then I don’t understand how you trust it to identify the problems, while simultaneously not trusting LLMs to identify pain points and roadblocks.
An LLM. A coding LLM writes code with its tools for writing files, searching docs, reading skills for specific technologies and so on; and the analysis LLM processes all interactions, summarizes them, tags issues, tracks token use for various task types, and identifies patterns across many sessions.
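To make the split concrete, here is a minimal sketch of the aggregation side, assuming transcripts have already been summarized and tagged by the analysis LLM upstream. All names here (`Session`, the tag strings, the task types) are hypothetical, not a real product's schema:

```python
# Aggregate per-session metadata emitted by an analysis LLM into the
# kind of rollups a CTO would look at: token spend per task type and
# pain points ranked by how many sessions hit them.
from collections import Counter, defaultdict
from dataclasses import dataclass

@dataclass
class Session:
    task_type: str        # e.g. "refactor", "debug", "docs"
    tokens: int           # total tokens spent in the session
    pain_points: list     # tags emitted by the analysis LLM

def aggregate(sessions):
    tokens_by_task = defaultdict(int)
    pain_counts = Counter()
    for s in sessions:
        tokens_by_task[s.task_type] += s.tokens
        pain_counts.update(s.pain_points)
    return dict(tokens_by_task), pain_counts

sessions = [
    Session("debug", 12_000, ["flaky-ci"]),
    Session("debug", 9_000, ["flaky-ci", "stale-docs"]),
    Session("refactor", 4_000, ["stale-docs"]),
]
tokens, pains = aggregate(sessions)
# Systemic issues surface as the tags many sessions share, not as
# whatever any single agent happened to flag as noteworthy.
print(tokens)
print(pains.most_common(2))
```

The point of the sketch: the per-conversation judgment (tagging) and the cross-conversation judgment (what is systemic) are separate steps, and only the second one requires aggregate data.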
oh man, can you imagine having this much faith in a statistical model that can be torpedo'd cause it doesn't differentiate consistently between a template, a command, and an instruction?
The article starts with a comparison of DSPy's and LangChain's monthly downloads and then wastes time comparing DSPy to hand-rolling basic infra, which is quite trivial in any barely mature setup.
My conjecture is that the core value proposition of DSPy is its optimizer. Yet the article doesn't really touch on it in any meaningful way. How does it work? How would I integrate it into my production setup? Is it even worth it for typical use-cases? Adding a retry is not a problem; creating and maintaining an AI control plane is. LangChain provides services for observability, online and offline evaluation, prompt engineering, deployment, you name it.
You can see many people saying this in the comments :). I personally think this misses the core of what Dspy "is".
Dspy encourages you to write your code in a way that better enables optimization, yes (and provides direct abstractions for that). But this isn't in a sense unique to Dspy: you can get these same benefits by applying the right patterns.
And those are the patterns. I just find people constantly implementing them without realizing it, and think they could benefit from understanding Dspy a bit better to make better implementations :)
The situation in which people exchange favors within their mutually beneficial personal networks seems to be the basic and typical way things function. It’s actually remarkable that we are able to resist this tendency and normalize fair and impartial institutions.
Are they even a developer? “Safety and alignment” as AI buzzwords are quite different from “security and privacy”. In any case, I wouldn’t take a random person with a sinecure job as exemplary of anything.
I had to vibe code a proxy to hide tokens from agents (https://github.com/vladimirkras/prxlocal) because I haven’t found any good solution either. I planned to add genai otel stuff that could be piped into some tool to view dialogues and tool calls and so on, but I haven’t found any good setup that doesn’t require lots of manual coding yet. It’s really weird that there are no solutions in that space.
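For anyone wondering what "hiding tokens from agents" means mechanically, here is a toy sketch of the header-rewriting core of such a proxy: the agent only ever sees a placeholder, and the proxy swaps in the real secret before forwarding upstream, so the secret never enters the agent's context. The placeholder name and the helper are my own illustration, not prxlocal's actual API:

```python
# The agent is configured with a fake credential; the proxy substitutes
# the real one on outbound requests. Secrets live only in the proxy
# process, never in the agent's transcript.
PLACEHOLDER = "PRX_TOKEN"  # hypothetical placeholder the agent sees

def rewrite_headers(headers, secrets):
    """Replace placeholder values with real secrets on the way out."""
    out = {}
    for name, value in headers.items():
        for key, secret in secrets.items():
            value = value.replace(key, secret)
        out[name] = value
    return out

# In practice the real value would come from the environment or a
# keychain; hardcoded here only to keep the sketch self-contained.
secrets = {PLACEHOLDER: "sk-real-123"}
agent_headers = {"Authorization": f"Bearer {PLACEHOLDER}"}
upstream_headers = rewrite_headers(agent_headers, secrets)
print(upstream_headers["Authorization"])
```

A full proxy additionally has to scrub the secret from any response bodies echoed back to the agent, which is the harder half of the problem.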
Reactive DOM updates – When you change state, the compiler tracks dependencies and generates efficient update code. In WebCC C++, you manually manage every DOM operation and call flush().
JSX-like view syntax – Embedding HTML with expressions, conditionals (<if>), and loops (<for>) requires parser support. Doing this with C++ macros would be unmaintainable.
Scoped CSS – The compiler rewrites selectors and injects scope attributes automatically. In WebCC, you write all styling imperatively in C++.
Component lifecycle – init{}, mount{}, tick{}, view{} blocks integrate with the reactive system. WebCC requires manual event loop setup and state management.
Efficient array rendering – Array loops track elements by key, so adding/removing/reordering items only updates the affected DOM nodes. The compiler generates the diffing and patching logic automatically.
Fine-grained reactivity – The compiler analyzes which DOM nodes depend on which state variables, generating minimal update code that only touches affected elements.
From a DX perspective: Coi lets you write <button onclick={increment}>{count}</button> with automatic reactivity. WebCC is a low-level toolkit – Coi is a high-level language that compiles to it, handling the reactive updates and DOM boilerplate automatically.
These features require a new language because they need compiler-level integration – reactive tracking, CSS scoping, JSX-like templates, and efficient array updates can't be retrofitted into C++ without creating an unmaintainable mess of macros and preprocessors. A component-based declarative language is fundamentally better suited for building UIs than imperative C++.
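The keyed array-rendering claim above boils down to standard keyed reconciliation: match old and new children by key so only changed entries produce DOM operations. A toy sketch of the classification step (real compilers also emit move patches for reordering, which this version omits):

```python
# Classify keys into added / removed / kept. "Kept" nodes are reused
# rather than rebuilt, which is why keyed loops only touch the
# affected DOM nodes.
def diff_keys(old_keys, new_keys):
    old, new = set(old_keys), set(new_keys)
    removed = [k for k in old_keys if k not in new]
    added = [k for k in new_keys if k not in old]
    kept = [k for k in new_keys if k in old]
    return added, removed, kept

added, removed, kept = diff_keys(["a", "b", "c"], ["b", "c", "d"])
```

Generating this logic at compile time, specialized to the loop body, is exactly the kind of thing that's awkward to retrofit onto C++ with macros.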